We tested the migration speed of the conversion and synchronization directions most popular among our clients. You can find the results below.
NOTE: all tests were carried out on local machines, because the migration speed for remote databases depends mainly on server load and network bandwidth.
The tests were performed on tables with 10 columns: 4 Numeric fields, 2 Date/Time fields, and 4 Character/String fields.
The migration speed is influenced by many factors:
- whether the two DB servers are on the same machine, on the same network, in the same data center, or geographically separated.
- how powerful the machines are, how busy the network is, whether any firewalls are in the way, and how quickly the servers respond.
- the complexity of the database structure; for example, if BLOB or Image columns are present, the process takes longer.
All tests were run on databases with 1 million records.
For one direction only, Oracle to MySQL, we also migrated a database with 10 million records. This test illustrates how conversion and synchronization time scales with the number of records.
Comments
I have a table with 1,416,719 records, going from MSSQL to MySQL. It seems to be taking a full day... perhaps 8+ hours to convert just that one table. Not sure how to speed it up. Perhaps it is because both servers are remote? Would it be faster if DBConvert was installed on the same server as the MSSQL DB?
Hi Brian,
We ran our tests on local servers. As noted above, the conversion speed for remote databases depends mainly on server load and bandwidth.
Also, for the tests we used simple database structures without Image or Binary data.
To increase conversion/synchronization speed and to make the process run properly, the unique or primary index must not be composite.
Have you tried the 'Quantization' option?
Quantization for MS SQL helps avoid memory issues: the larger and more complicated the generated query, the more time the server needs to process it.
Please find more info about this option here:
https://support.dbconvert.com/hc/en-us/articles/201185582-How-to-configure-DBConvert-DBSync-software-for-processing-large-tables-efficiently-
https://support.dbconvert.com/hc/en-us/articles/201207032-Bulk-Insert-and-Quantization-options-
In the documentation I read that you must have a primary key or it will not convert or sync properly. You seem to be suggesting there should be no primary key? I checked Bulk Insert as well as Quantization, and it does seem to be moving faster. I am not sure I understand what the quantization slider controls. I currently have it set to 100.
Thank you for your help.
Hi Brian,
A primary key or unique index is required, but it should not be a composite one.
If 'Quantization' is checked, the synchronization speed increases for large databases. This is achieved by splitting the whole record set into portions.
We advise you to set the slider value to 100, or lower it until the error disappears.
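To illustrate the general idea, here is a minimal Python sketch of copying a table in fixed-size portions, keyed on a single-column primary key. This is only an illustration of the portion-by-portion approach, not DBConvert/DBSync's actual implementation; the table name, columns, portion size, and the use of sqlite3 are assumptions made so the example is self-contained and runnable.

# Illustrative sketch only: NOT how DBConvert/DBSync is implemented internally.
# It shows the idea behind "quantization": transferring the record set in
# fixed-size portions, keyed on a single-column primary key, instead of
# pulling everything in one large query.
import sqlite3

PORTION_SIZE = 100  # analogous to the quantization slider value (assumed meaning)

def copy_in_portions(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    last_id = 0  # keyset pagination works because "id" is a single-column primary key
    while True:
        rows = src.execute(
            "SELECT id, name, amount FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, PORTION_SIZE),
        ).fetchall()
        if not rows:
            break
        dst.executemany("INSERT INTO items (id, name, amount) VALUES (?, ?, ?)", rows)
        dst.commit()           # commit each portion to keep memory use and transactions small
        last_id = rows[-1][0]  # resume after the last copied key

# Minimal demo with in-memory databases
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
src.executemany(
    "INSERT INTO items (id, name, amount) VALUES (?, ?, ?)",
    [(i, f"row{i}", i * 1.5) for i in range(1, 1001)],
)
copy_in_portions(src, dst)
print(dst.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # expected: 1000

A smaller portion size means smaller queries and lower memory pressure per step, at the cost of more round trips, which is why lowering the slider can make a memory-related error disappear.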