How to import data from a DBF file to SQL Server
Data migration must be fast, reliable, and easily automatable. This is especially true for data import and export operations, which shouldn't take more time than necessary. If you want to set up a recurring import (for instance, you need to regularly update a table with data from a file), the most convenient way is to configure your import settings once, save them to a template, and simply apply that template whenever you need it (you can even schedule regular import operations to run at a certain time).
To get all this done, you need a smart and flexible assistant that covers a wide range of data formats. And if (a) you're dealing with Microsoft SQL Server and (b) you're an avid user of SSMS, you won't find a better candidate than dbForge Data Pump, a versatile SSMS add-in that helps you migrate data between external sources and SQL Server databases.
Data Pump supports multiple data formats, including TXT, XLS, XLSX, MDB, XML, JSON, CSV, ODBC, Google Sheets, and the one we'll be talking about below: DBF.
What's so special about the DBF file format? Introduced in 1983, DBF was originally associated with dBase, one of the earliest database management systems and the most successful one in its day. A DBF file is quite straightforward: it stores table data as an array of records, each comprising multiple fields. The array begins with a header record that defines the structure of the file, including the number of records and the length of each record.
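If you're curious what that header looks like, here is a minimal Python sketch that reads the fixed 32-byte header opening every classic dBase III-style DBF file (the file name is hypothetical, and later DBF variants may extend this layout):

```python
import struct

def read_dbf_header(path):
    # The first 12 bytes hold the file type, last-update date (YY MM DD),
    # record count, header length, and record length, all little-endian.
    with open(path, "rb") as f:
        buf = f.read(32)
    version, yy, mm, dd, records, header_len, record_len = struct.unpack(
        "<4BIHH", buf[:12]
    )
    return {
        "version": version,
        "last_update": (1900 + yy, mm, dd),  # year is stored as an offset from 1900
        "records": records,
        "header_length": header_len,         # header plus field descriptors
        "record_length": record_len,         # fixed size of every data record
    }

print(read_dbf_header("customers.dbf"))
```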
Here are some of the advantages of using DBF files:
- DBF is compatible with multiple database management systems.
- Due to their inherent simplicity, DBF files can be easily viewed with a variety of applications.
- DBF can be easily converted into other formats such as XLS, CSV, TSV, or JSON.
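As a quick illustration of that last point, converting a DBF file to CSV takes just a few lines of Python with the third-party dbfread library (pip install dbfread; the file names below are hypothetical):

```python
import csv
from dbfread import DBF

table = DBF("customers.dbf", encoding="cp1252")  # encoding depends on the source file

with open("customers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=table.field_names)
    writer.writeheader()
    for record in table:  # each record behaves like a dict of field name -> value
        writer.writerow(record)
```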
Now let's see how to import data from a DBF file into a SQL Server database using SSMS and dbForge Data Pump.
Prerequisites
There are two prerequisites to speak of.
1. Make sure that you have a DBF file with data that you want to import into a SQL Server database.
2. Download and install dbForge Data Pump on the machine where the data import will be performed. Since Data Pump is available for a free 30-day trial, this is a nice opportunity to see it in action and check how well it suits your needs.
Data import from a DBF file
Now open SSMS, connect to the required database, and let's get started.
1. In Object Explorer, right-click the database and select Data Pump > Import Data to open the Data Import Wizard.
2. You will be greeted by the Source file page, where you first need to select the DBF format and load the source file using the File name field. Once that's done, click Next.
3. On the Destination page, select a server connection, a database, and a schema. Then choose whether the data will be imported into a New table (in this case, you need to specify its name) or into an Existing table. Afterwards, click Next.
4. On the Options page, you can configure the formatting options for imported data.
- First, you can check whether the Encoding is set correctly or select another one from the drop-down list.
- Next, you can select the Read deleted rows checkbox, if necessary, to import rows marked as deleted in your source data (the sketch after this step shows what that deletion mark looks like on disk).
- Finally, you can preview the source data and click Next.
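A bit of background on the Read deleted rows option: in the DBF format, deleting a record does not physically remove it. Each record starts with a one-byte flag (a space for active rows, an asterisk for deleted ones), and flagged rows linger in the file until it is packed. Here is a minimal Python sketch (with a hypothetical file name) that counts such rows:

```python
import struct

def count_deleted_rows(path):
    with open(path, "rb") as f:
        header = f.read(12)
        # Record count, header length, and record length from the DBF header.
        records, header_len, record_len = struct.unpack("<IHH", header[4:12])
        f.seek(header_len)  # jump past the header and field descriptors
        deleted = 0
        for _ in range(records):
            record = f.read(record_len)  # record_len includes the flag byte
            if record[:1] == b"*":       # b" " would mean the row is active
                deleted += 1
    return deleted

print(count_deleted_rows("customers.dbf"))
```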
5. On the Data formats page, you have two auxiliary tabs. The first one is Common Formats, where you can specify the formats for null strings, thousands and decimal separators, Boolean values, and date and time. There is also an Autodetect Date and Time format checkbox, selected by default.
The second tab is Column Settings, where you can configure the format settings for individual columns. You have four options here: Null String, Left Quote, Right Quote, and Date and Time. Note that if a format mask is not set, the application will identify date/time values automatically.
6. On the Mapping page, you can map the source columns to the target ones. If you are importing data into a new table, the application will automatically create and map all the columns. You can see the results in the preview section.
Note that if you are importing data into an existing table, only columns with identical names will be mapped automatically, while the remaining columns must be mapped manually. If no columns with the same name are found, the columns are mapped in order: the first column in Source to the first column in Target, and so on.
You should mark at least one column as a primary key by selecting the corresponding Key checkbox. Otherwise, some of the import modes on the Modes page will be unavailable.
You can click View on the toolbar to view column properties. Finally, to clear the mapping of all columns, click Clear Mappings on the toolbar. To restore it, click Fill Mapping.
7. On the Modes page, select an import mode. If you chose to import data into an existing table, all the options will be available. If you are importing data into a new table, only the Append mode can be used.
- Append - add records to the target table
- Update - update a record in the target table with a matching record from the source table
- Append/Update - update a record if it exists in the target table; otherwise, add a record
- Delete - delete records in the target table that match records in the source table
- Repopulate - delete all records in the target table and repopulate it with records from the source table
Optionally, you can select the Use a single transaction and Use bulk insert checkboxes; the latter reduces the number of statements and speeds up the import, but it can affect the error handling mechanism.
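To demystify these modes a little: each of them corresponds to a familiar T-SQL pattern. Append is essentially a series of INSERTs, Update and Append/Update map naturally onto MERGE, and Repopulate is a DELETE followed by INSERTs. As a rough sketch (not the exact script the wizard generates), here is what the Append/Update mode boils down to, written in Python with pyodbc and hypothetical table and column names:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Hypothetical target: dbo.Customers(ID int PRIMARY KEY, Name nvarchar(100)).
# Update the row when the key matches, insert it otherwise.
upsert = """
MERGE dbo.Customers AS t
USING (SELECT ? AS ID, ? AS Name) AS s
    ON t.ID = s.ID
WHEN MATCHED THEN UPDATE SET t.Name = s.Name
WHEN NOT MATCHED THEN INSERT (ID, Name) VALUES (s.ID, s.Name);
"""
rows = [(1, "Alice"), (2, "Bob")]  # stand-ins for rows read from the DBF file
cur.executemany(upsert, rows)
conn.commit()
```

This is also why the Key checkbox on the Mapping page matters: without a key column, there is nothing for the matching modes to join on.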
8. On the Output page, select one of the three available output options.
- Open the data import script in the internal editor
- Save the data import script to a file with a specified path and name; additionally, you can select the Add timestamp to the file name and Open in the internal editor checkboxes
- Import data directly to the database
9. On the Errors handling page, you can specify the error processing behavior using one of the three available options.
- Prompt the user for an action
- Ignore all errors
- Abort at the first error
In addition, you can specify the path to a log file where error reports will be written.
10. Click Import. When the data import process is completed, you can view the log file (if you opted to create one), click Import more to perform a new import operation, or click Finish to exit the wizard.
Tips
To help you make the most of your import, we'd like to suggest the following tips:
- Before importing a DBF file, make sure it does not contain inconsistencies or errors.
- If your target table has indexes, you may consider removing them before the import and recreating them afterwards; this will speed up the import process (see the sketch after these tips).
- Use appropriate transaction management settings. In some cases, removing or reducing transaction logging during the import may improve its overall speed.
- If you have to deal with large datasets, consider using bulk loading techniques such as the BULK INSERT statement or the BCP (Bulk Copy Program) utility for faster data transfer (also illustrated in the sketch after these tips).
- If you want to save your import settings as a template for recurring use, click Save Template in the upper left corner of the wizard. You can do this at any point while configuring your settings.
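To illustrate the index and bulk loading tips in one place, here is a hypothetical Python sketch: it disables a nonclustered index (a variant of the remove-and-recreate approach), bulk-loads a CSV previously exported from the DBF file using BULK INSERT, and rebuilds the index. All table, index, and file names are assumptions, and the file path must be accessible to the SQL Server instance itself:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes;",
    autocommit=True,  # each statement commits on its own
)
cur = conn.cursor()

# Disable a (hypothetical) nonclustered index so it is not maintained row by row.
cur.execute("ALTER INDEX IX_Customers_Name ON dbo.Customers DISABLE;")

# Bulk-load the staged CSV; the path is resolved on the server, not the client.
cur.execute("""
    BULK INSERT dbo.Customers
    FROM 'C:\\import\\customers.csv'
    WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n');
""")

# Rebuild the index so queries can use it again.
cur.execute("ALTER INDEX IX_Customers_Name ON dbo.Customers REBUILD;")
```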
Conclusion
We hope that our tutorial will be useful for your future endeavors. The overall flow is generally applicable to all 10 data formats available for import in Data Pump; however, the number of pages and options may vary from format to format.
Still, it's about as flexible as import and export can get, so we gladly invite you to download Data Pump for a free 30-day trial.
Note that Data Pump comes as part of a bundle called dbForge SQL Tools,
which contains 15 standalone apps and SSMS add-ins designed to empower you with the best tools for SQL Server development, management, and administration.