https://stackoverflow.com/questions/104612/run-mysqldump-without-locking-tables
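For reference, a minimal no-lock dump along the lines of that first link; this is a sketch assuming an all-InnoDB database, with placeholder names:
# --single-transaction takes a consistent snapshot instead of locking tables (InnoDB only)
mysqldump --single-transaction --quick --user="myuser" --password --host="localhost" "mydatabase" > dump.sql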
https://dba.stackexchange.com/questions/17367/how-can-i-monitor-the-progress-of-an-import-of-a-large-sql-file
https://stackoverflow.com/questions/2016894/how-to-split-a-large-text-file-into-smaller-files-with-equal-number-of-lines
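The two links above boil down to something like this (a sketch; file and database names are placeholders):
# split the dump into 1000-line chunks (mytable_part_aa, mytable_part_ab, ...);
# this only splits cleanly when each INSERT sits on its own line
split -l 1000 mytable.sql mytable_part_
# pipe the import through pv to get a byte-based progress bar
pv mytable.sql | mysql --user="myuser" --password --host="localhost" "mydatabase"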
mysqldump --net_buffer_length=4096 --create-options --default-character-set="utf8" --host="localhost" --hex-blob --lock-tables --password --quote-names --user="myuser" "mydatabase" "mytable" > mytable.sql
https://stackoverflow.com/questions/5013151/how-do-i-limit-the-number-of-results-returned-from-grep
https://stackoverflow.com/questions/30658703/how-to-print-the-line-number-where-a-string-appears-in-a-file
https://askubuntu.com/questions/1026045/limit-grep-output-to-short-lines
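Combining those three grep tricks to locate a bad row without flooding the terminal with a megabyte-long INSERT line (the search pattern is a placeholder):
# -n prints the line number, -m 1 stops after the first match,
# and cut truncates the matching line to its first 200 characters
grep -n -m 1 'some_record_id' mytable.sql | cut -c 1-200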
I found what the problem was: the MySQL variable/parameter explicit_defaults_for_timestamp was OFF on my local machine but ON on my remote machine.
I visited my AWS RDS Parameter Groups page and changed explicit_defaults_for_timestamp from 1 to 0. Then I went to my AWS RDS instances page and waited for the instance's "Parameter Group" status to change from "Applying" to "pending-reboot". Then I rebooted that instance.
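The same change can also be scripted; a rough sketch with the AWS CLI, where the parameter group and instance names are made up:
aws rds modify-db-parameter-group --db-parameter-group-name my-param-group --parameters "ParameterName=explicit_defaults_for_timestamp,ParameterValue=0,ApplyMethod=pending-reboot"
aws rds reboot-db-instance --db-instance-identifier my-instance
# verify on the server after the reboot
mysql -h my-instance.rds.amazonaws.com -u myuser -p -e "SHOW VARIABLES LIKE 'explicit_defaults_for_timestamp'"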
These links helped me:
https://stackoverflow.com/a/23392448/470749
How to import MySQL binlog that contains INSERTs of a TIMESTAMP field with default value CURRENT_TIMESTAMP
https://stackoverflow.com/questions/18264942/how-to-import-mysql-binlog-that-contains-inserts-of-a-timestamp-field-with-defau
https://forums.aws.amazon.com/thread.jspa?threadID=132676
https://dba.stackexchange.com/questions/83125/mysql-any-way-to-import-a-huge-32-gb-sql-dump-faster
https://stackoverflow.com/questions/51395925/mysql-error-max-allowed-packet-bytes-during-import-sql-script-on-database-host
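If the import dies with the max_allowed_packet error from that last link, raising the client-side limit looks roughly like this (host and names are placeholders; on RDS the server-side max_allowed_packet has to be raised via the parameter group):
# raises the client's packet buffer for this import; the server's own limit still applies
mysql --max-allowed-packet=512M -h my-instance.rds.amazonaws.com -u myuser -p mydatabase < dump.sql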
zip yourfile.zip sourcedir/* .*
https://stackoverflow.com/questions/12493206/zip-including-hidden-files
Conclusion:
Dumping with one INSERT per row seems very slow on import, as many people have experienced.
In some special cases, though, for example when a huge (serialized/JSON) record imported into wp_options exceeds the packet limit (the RDS default is ~4 MB?), we have to dump with separated INSERTs to figure out which line/record caused the problem. Raising the RDS max_allowed_packet parameter (?) is still required, but the separated dump is useful here.
Although we could use an editor or another tool to cut lines out of a multi-row dump, it is easier to dump with separated INSERTs on the first try, for example to get the line number (with the record ID) where the import errors, as sketched below.
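A sketch of that first-try approach (names are placeholders):
# --skip-extended-insert writes one INSERT per row, so error line numbers map to individual records
mysqldump --skip-extended-insert --user="myuser" --password "mydatabase" "wp_options" > wp_options_rows.sql
# then locate the offending record by ID and line number
grep -n 'option_id_here' wp_options_rows.sql | cut -c 1-120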
nohup mysqldump --no-create-info -u db_name -p'password' -h 172.xxx.host --single-transaction --quick --ignore-table=db_name.wp_usermeta db_name > db_name_live_data.sql &
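Because that dump runs in the background under nohup, progress can be checked with ordinary tools, e.g.:
# watch the output file grow; errors from mysqldump land in nohup.out
ls -lh db_name_live_data.sql
tail nohup.out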
https://stackoverflow.com/questions/11601692/mysql-amazon-rds-error-you-do-not-have-super-priviledges