Import a large SQL dump file to a MySQL database from the command line

Today I had to import a very large SQL dump file (6 GB) to a MySQL database using the Windows command line. The process is the same on Linux. The steps are the following:

  1. Open a command prompt (or a shell in Linux) with administrative privileges
  2. On Windows, switch the console code page to UTF-8 (Linux uses UTF-8 by default):
    chcp 65001
  3. Connect to a mysql instance using command line
    $PATH_TO_MYSQL\mysql.exe -h 192.168.1.1 --port=3306 -u root -p

    If the server runs on localhost you can omit the host and port:

    $PATH_TO_MYSQL\mysql.exe -u root -p
  4. You are now in the MySQL shell. Set the network buffer length to a large value; the default may cause errors with such large data files. Note that SET GLOBAL requires administrative (SUPER) privileges:
    set global net_buffer_length=1000000;
  5. Set the maximum allowed packet size to a large value; the default may also cause errors for such large data files.
    set global max_allowed_packet=1000000000;
  6. Disable foreign key checks to avoid delays, errors and unwanted behaviour. Disabling unique checks and autocommit also speeds up the import:
    SET foreign_key_checks = 0;
    SET UNIQUE_CHECKS = 0;
    SET AUTOCOMMIT = 0;
    
  7. Import your SQL dump file:
    source C:\bob_db\dbdump150113.sql

You are done! Remember to re-enable foreign key checks, unique checks and autocommit when the procedure is complete:

SET foreign_key_checks = 1;
SET UNIQUE_CHECKS = 1;
SET AUTOCOMMIT = 1;
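At any point you can verify from the mysql shell that the values are actually in effect (standard MySQL syntax; note that a session opened before the SET GLOBAL statements keeps its old session copies of these variables):

```sql
-- Check the global values set in steps 4 and 5.
SHOW GLOBAL VARIABLES LIKE 'net_buffer_length';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';

-- Check the session flags from step 6.
SELECT @@foreign_key_checks, @@unique_checks, @@autocommit;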

If you are on Linux you can create a Bash script which does the dirty work and writes the start and end times of the import to stdout:

#!/bin/sh 

# store start date in a variable
imeron=$(date)
dumpfile="/home/bob/bobiras.sql"

ddl="set names utf8; "
ddl="$ddl set global net_buffer_length=1000000;"
ddl="$ddl set global max_allowed_packet=1000000000; "
ddl="$ddl SET foreign_key_checks = 0; "
ddl="$ddl SET UNIQUE_CHECKS = 0; "
ddl="$ddl SET AUTOCOMMIT = 0; "
# if your dump file does not create a database, select one
ddl="$ddl USE jetdb; "
ddl="$ddl source $dumpfile; "
ddl="$ddl SET foreign_key_checks = 1; "
ddl="$ddl SET UNIQUE_CHECKS = 1; "
ddl="$ddl SET AUTOCOMMIT = 1; "
ddl="$ddl COMMIT ; "

echo "Import started: OK"

time mysql -h 127.0.0.1 -u root -proot -e "$ddl"

# store end date in a variable
imeron2=$(date)

echo "Start import:$imeron"
echo "End import:$imeron2"
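Since the script already records both timestamps, a small variation (a sketch; it uses epoch seconds instead of formatted dates, and a `sleep` stands in for the actual `mysql` invocation so it runs as-is) prints the elapsed time directly:

```shell
#!/bin/sh
# Record start time in epoch seconds.
start=$(date +%s)

# ... run the import here, e.g.:
# time mysql -h 127.0.0.1 -u root -p -e "$ddl"
sleep 1   # stand-in for the import so the sketch is self-contained

end=$(date +%s)
echo "Import took $((end - start)) seconds"
```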

About cmanios

This entry was posted in Databases, mysql.

33 Responses to Import a large sql dump file to a MySQL database from command line

  1. Marinos says:

    I have read many articles for importing large dbs in mysql…u made my day..THNX!!!

  2. jeff says:

    Thanks for the article – maybe you can help me out. I’ve taken the steps above and I’m getting an error 2006 (MySQL server has gone away) when trying to import a large file. Any ideas why this might be?

  3. Didier says:

    Thank you very much, your post saved my day. I was having errors when importing large files, your instructions worked like a charm.

  4. I have followed your recommendations, but when setting max_allowed_packet and net_buffer_length I got 1 warning in the query and I don’t know what it is. However, the performance has not increased.

  5. Nice guide, thanks, but you forgot ‘USE databasename;’.

  6. Ken Show says:

    This is simply incredible! Many thanks!

  7. kamalsaleh813 says:

    This worked perfectly for me .. Thank you Very Much!

  8. Stefan says:

    Thanks, it helped me a lot.

  9. mimma says:

    Didn’t work with me . “No database selected”

  10. mimma says:

    oops , sorry ma bad
    everything is fine ^_^
    thanks

  11. ann b says:

    This is so helpful. Reduced my import time from ~1 hour to 4 minutes. Thank you very much.

  12. Azure says:

    Thanks for your article. But how long does it take to import a 6GB db? I am looking for a solution to import a very big db, around 150 million records every day, to MySQL. Time is very important.

    • cmanios says:

      @Azure: In my situation it took from about 45 minutes to 2 hours to import a 6GB file. This file contained tables with 20-30 million rows and many indexes. Although I am not a MySQL expert, I noticed that you can achieve better performance if you tune your memory settings (like tmp_table_size, max_heap_table_size, max_binlog_size) in the my.cnf file.
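      For reference, such settings live in the [mysqld] section of my.cnf; the values below are only illustrative (a sketch, not a recommendation) and should be sized to the machine's RAM:

```ini
[mysqld]
# Illustrative values only; tune to your hardware and workload.
innodb_buffer_pool_size = 2G
tmp_table_size          = 256M
max_heap_table_size     = 256M
max_binlog_size         = 512M
max_allowed_packet      = 1G
net_buffer_length       = 1M
```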

  13. freij. says:

    Hi there, thank you for this article. It was really helpful for me.

  14. Julius says:

    Thanks for putting this together as a script. Does anyone have an idea how I could combine this with dump files that are compressed? I used to import these databases like this:

    gunzip < "dbdump.sql.gz" | mysql -u xyz -p'..' database

    For small dumps, extracting them before is fine, for large ones this seems somewhat problematic as it requires a lot of spare disk space.
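    A streaming approach avoids the intermediate extracted file entirely: the decompressed bytes are piped straight into the client. A sketch (the real mysql line is commented out because it needs a live server; the demo gzips a tiny stand-in file just to show the pipe):

```shell
#!/bin/sh
# Stand-in for a real dump: a small SQL file, gzipped.
printf 'SELECT 1;\n' > /tmp/demo_dump.sql
gzip -f /tmp/demo_dump.sql          # produces /tmp/demo_dump.sql.gz

# Stream the decompressed bytes without writing an extracted copy to disk.
streamed=$(gunzip -c /tmp/demo_dump.sql.gz)
echo "$streamed"

# Against a real server the same pipe would be (credentials hypothetical):
# gunzip -c dbdump.sql.gz | mysql -u root -p database
```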

  15. mortona42 says:

    imeron? Timer On?

    • cmanios says:

      Hello mortona42! imeron and imeron2 are just two variables to record start and end datetime of import process. See the updated script.

  16. JustME says:

    Dont forget to select the database before you use your dump.

    USE databasenamehere;

  17. nikunj ghelani says:

    This is a very nice post!

  18. kudoh says:

    WORKED ! , Thanks DUDE !!

  19. Ann B says:

    The script you provided works great on mysql 5.5. Thank you. However I recently received a newer PC at work and upgraded the local mysql server from 5.5.37 to 5.7.13-0ubuntu0.16.04.2. The import process duration increased from 5 minutes to 25 minutes. The import file is only 110MB. I see in this post (http://stackoverflow.com/questions/37530126/mysql-5-5-faster-on-database-import-and-drop-than-5-7) that 5.7 is slower. I’ve tried changing several performance tuning settings but with no luck.

  20. Lokesh says:

    Not working in Windows 10 64-bit. Works perfectly on Windows 7. I am using Wamp Server on both machines.
    Windows 7 takes 30 min to import the database and Windows 10 takes 1.5 hours for the same database.

    • cmanios says:

      Hello Lokesh!

      You might have to check the MySQL configuration in my.ini, as WAMP Server probably comes with a predefined, preconfigured MySQL installation which might not be optimised for high loads. Sometimes it helps to tweak the innodb_buffer_pool_size parameter.

      Best regards,
      Christos

  21. Kim says:

    I always use like this :-), that work!
    mysql -u[username] -p[password] database_name < file.sql

  22. Varun says:

    Hi, Can we use this above method for INNODB as well? I was trying to disable indexing. Not sure if setting foreign key check to 0 would be any helpful. Could you please help me out? Sorry, i am a novice in database stuff. Thanks

    • cmanios says:

      Hi Varun!

      All these examples have been used in production databases which use InnoDB as the storage engine. The MyISAM engine does not enforce foreign key checks, as far as I know.

  23. Crescenzo says:

    Sorry,
    but where i put this script? Thanks

    • cmanios says:

      Hello Crescenzo!

      This is a Linux Bash shell script. You can execute it wherever you like on a Linux OS, provided that you have the MySQL client installed and the correct credentials and permissions for a MySQL database.

      Best regards,
      Christos
