FastTransfer is a command-line interface tool designed for efficient data transfer between various database systems. It offers a wide range of options to customize the data transfer process to suit different requirements and environments.
Supported sources:

Source | Windows AMD64 | Linux AMD64 | Linux ARM64 |
---|---|---|---|
ClickHouse | ✅ | ✅ | ✅ |
DuckDB | ✅ | ✅ | ✅ |
MySQL | ✅ | ✅ | ✅ |
Netezza | ✅ | ✅ | ✅ |
ODBC | ✅ | ✅ | ✅ |
OLEDB | ✅ | ❌ | ❌ |
Oracle | ✅ | ✅ | ✅ |
PostgreSQL | ✅ | ✅ | ✅ |
SQL Server | ✅ | ✅ | ✅ |
SAP Hana | ✅ | ✅ | ❌ |
Teradata | ✅ | ✅ | ❌ |
Supported targets:

Target | Windows AMD64 | Linux AMD64 | Linux ARM64 |
---|---|---|---|
ClickHouse | ✅ | ✅ | ✅ |
DuckDB | ✅ | ✅ | ✅ |
MySQL | ✅ | ✅ | ✅ |
Netezza | ✅ | ✅ | ✅ |
ODBC | ❌ | ❌ | ❌ |
OLEDB | ❌ | ❌ | ❌ |
Oracle | ✅ | ✅ | ✅ |
PostgreSQL | ✅ | ✅ | ✅ |
SQL Server | ✅ | ✅ | ✅ |
SAP Hana | ✅ | ✅ | ❌ |
Teradata | ✅ | ✅ | ❌ |
Supported Linux distributions:

OS | Versions | Architectures |
---|---|---|
Alpine | 3.21, 3.20, 3.19, 3.18 | Arm64, x64 |
Azure Linux | 3.0 | Arm64, x64 |
CentOS Stream | 10, 9 | Arm64, x64 |
Debian | 12 | Arm64, x64 |
Fedora | 41, 40 | Arm64, x64 |
openSUSE Leap | 15.6 | Arm64, x64 |
Red Hat Enterprise Linux | 10, 9, 8 | Arm64, x64 |
SUSE Enterprise Linux | 15.6 | Arm64, x64 |
Ubuntu | 24.10, 24.04, 22.04, 20.04 | Arm64, x64 |
A dedicated page documents how to check the certificate on Linux: check linux certificate.
Supported Windows versions:

OS | Versions | Architectures |
---|---|---|
Nano Server | 2025, 2022, 2019 | x64 |
Windows | 11 24H2 (IoT), 11 24H2 (E), 11 24H2, 11 23H2, 11 22H2 (E), 10 22H2, 10 21H2 (E), 10 21H2 (IoT), 10 1809 (E), 10 1607 (E) | x64 |
Windows Server | 2025, 23H2, 2022, 2019, 2016, 2012-R2, 2012 | x64 |
Windows Server Core | 2025, 2022, 2019, 2016, 2012-R2, 2012 | x64 |
The command line is organised into the following groups of options:

- FastTransferCommand:
  - FastTransferOptions:
    - SourceConnectionType
    - SourceConnectionParameters
    - SourceInfos
    - TargetConnectionType
    - TargetConnectionParameters
    - TargetInfos
    - ParallelParameters
    - MappingParameters
    - LogParameters
    - LicenseParameters
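Put together, an invocation follows that structure. Here is a minimal sketch in which every value is a placeholder; only the options your scenario needs have to be supplied (see the reference below):

```bash
# Minimal sketch: every value below is a placeholder, not a working configuration.
./FastTransfer \
  --sourceconnectiontype "<type>" \
  --sourceserver "<server>" \
  --sourceuser "<user>" \
  --sourcepassword "<password>" \
  --sourcedatabase "<database>" \
  --sourceschema "<schema>" \
  --sourcetable "<table>" \
  --targetconnectiontype "<type>" \
  --targetserver "<server>" \
  --targetuser "<user>" \
  --targetpassword "<password>" \
  --targetdatabase "<database>" \
  --targetschema "<schema>" \
  --targettable "<table>" \
  --method "<method>" \
  --degree "<n>" \
  --mapmethod "<Position|Name>" \
  --runid "<id>"
```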
WARNING: WE STRONGLY ADVISE USING THE LONG PARAMETER NAMES.
- `-?`, `--help` : Show help information.
- `-c`, `--sourceconnectiontype <type>` : Source connection type. Allowed values:
  - `clickhouse` for ClickHouse
  - `duckdb` for DuckDB
  - `duckdbstream` for DuckDB using streaming (more memory efficient)
  - `hana` for SAP HANA (.NET driver)
  - `msoledbsql` for SQL Server OleDB
  - `mssql` for SQL Server Native Client (.NET)
  - `mysql` for MySQL
  - `nzcopy` for Netezza copy (WIP)
  - `nzoledb` for Netezza OleDB
  - `nzsql` for Netezza Native (.NET driver)
  - `odbc` for an ODBC datasource (the DSN must be configured)
  - `oledb` for a generic OleDB datasource
  - `oraodp` for Oracle ODP.NET
  - `pgcopy` for PostgreSQL COPY
  - `pgsql` for PostgreSQL Native (.NET)
  - `teradata` for Teradata (.NET)
- `-g`, `--sourceconnectstring <connectionstring>` : Source connection string (overrides all other source connection parameters).
- `-n`, `--sourcedsn <dsn>` : ODBC DSN (only if the odbc source type is used). The drivers must already be installed on the machine and the DSN must be configured.
- `-p`, `--sourceprovider <provider>` : OleDB provider (e.g., `MSOLEDBSQL` for MSSQL or `NZOLEDB` for Netezza…). Only if the oledb source type is used. The OleDB provider must already be installed on the machine.
- `-i`, `--sourceserver <server>` : Source server.
- `-u`, `--sourceuser <user>` : Source user.
- `-x`, `--sourcepassword <password>` : Source user's password.
- `-a`, `--sourcetrusted` : Switch to use trusted authentication on the source.
- `-d`, `--sourcedatabase <database>` : Source database.
- `-s`, `--sourceschema <schema>` : Source schema (must be set if the pgsql Ctid method is used).
- `-t`, `--sourcetable <table>` : Source table (must be set if the pgsql Ctid method is used).
- `-q`, `--query <query>` : Plain-text SQL query. Used instead of the source table if provided.
- `-f`, `--fileinput <file>` : Input file storing the SQL query. The file must exist.
- `-C`, `--targetconnectiontype <type>` : Target connection type. Allowed values:
  - `clickhousebulk` for ClickHouse BulkCopy
  - `duckdb` for DuckDB
  - `hanabulk` for SAP HANA
  - `msbulk` for SQL Server BulkCopy
  - `mysqlbulk` for MySQL BulkCopy
  - `nzbulk` for Netezza BulkCopy
  - `orabulk` for Oracle BulkCopy
  - `oradirect` for Oracle Direct
  - `pgcopy` for PostgreSQL using COPY (binary format for PostgreSQL sources, text format for others)
  - `pgsql` for PostgreSQL Native (.NET)
  - `teradata` for Teradata (.NET)
- `-I`, `--targetserver <server>` : Target server.
- `-U`, `--targetuser <user>` : Target user.
- `-X`, `--targetpassword <password>` : Target user's password.
- `-A`, `--targettrusted` : Switch to use trusted authentication on the target.
- `-D`, `--targetdatabase <database>` : Target database.
- `-S`, `--targetschema <schema>` : Target schema.
- `-T`, `--targettable <table>` : Target table.
- `-M`, `--method <method>` : Method for parallelism (if needed). Allowed values:
  - `DataDriven` : use the distinct values of the distributeKeyColumn (which can be a column or an expression) to distribute the data
  - `Ctid` : recommended for, and exclusive to, PostgreSQL and PostgreSQL-compatible databases; uses the Ctid pseudo-column (pgsql and pgcopy sources only)
  - `Random` : use a modulo on the distributeKeyColumn to distribute the data
  - `Rowid` : Oracle sources only; use rowid slices
  - `RangeId` : use a numeric range to distribute the data (useful with an identity column or a sequence without gaps; the column must be numerical)
  - `Ntile` : use the ntile function to distribute the data evenly; the column can be numerical, date, datetime, or string
  - `NZDataSlice` : Netezza sources only; use the data slices to distribute data retrieval
  - `None` : no parallelism

  Default value: `None`. To run a parallel export/import, choose any method other than None. Use parallelism only when you have a large amount of data to transfer (more than 1M cells).
- `-K`, `--distributeKeyColumn <column>` : Column used to distribute the data. Not needed if the Ctid method is used.
- `-P`, `--degree <degree>` : Degree of parallelism.
  - `0` for auto
  - `0 < n < 1024` for a fixed degree
  - `n < 0` : negative values adapt the degree of parallelism to the number of available CPUs; e.g., `-2` uses half the CPUs of the machine where FastTransfer is launched

  Note: whatever the degree, if the method is `None` the extraction remains serial. Default value: `-2`.
- `-Q`, `--datadrivenquery <query>` : Override the query used to get the list of values for the `DataDriven` method. It lets you avoid a SELECT DISTINCT of the distributeKeyColumn on a large table when a reference table already contains all the values (see the sketch after this list).
- `-L`, `--loadmode <mode>` : Allowed values: `Append` to append data to the target table, `Truncate` to truncate the target table before loading. Default value: `Append`.
- `-B`, `--batchsize <size>` : Batch size for BulkCopy. Default value: `1048576`.
- `-W`, `--useworktables` : Switch that activates the usage of intermediate work tables. Useful in some rare cases.
- `-N`, `--mapmethod <method>` : Allowed values: `Position` maps the columns by their position in the source and target tables; `Name` maps the columns by their name (case-insensitive) and ignores missing columns (from source or target). Default value: `Position`.
- `-R`, `--runid <RunSpanID>` : Run ID coming from the caller, used to allow tracing of the process. Default is a random GUID.
- `-O`, `--settingsfile <file>` : Custom settings file for logging and other settings. Default is `FastTransfer_settings.json` in the same folder as the executable.
- `--license <filepath|url>` : License file. Default is `FastTransfer.lic` in the same folder as the executable. You can provide another file path or a URL from which to get the license information.
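For instance, a sketch of a DataDriven transfer that uses `--datadrivenquery`; the source objects (`SalesDb`, `dbo.Orders`, `region_id`, `dbo.ref_regions`) are hypothetical names used only for illustration:

```bash
# Hypothetical objects: distribute the work by the distinct values of region_id,
# reading the value list from a small reference table instead of running a
# SELECT DISTINCT against the large Orders table.
./FastTransfer \
  --sourceconnectiontype "mssql" \
  --sourceserver "localhost" \
  --sourceuser "fastuser" \
  --sourcepassword "fastpassword" \
  --sourcedatabase "SalesDb" \
  --sourceschema "dbo" \
  --sourcetable "Orders" \
  --targetconnectiontype "pgcopy" \
  --targetserver "localhost" \
  --targetuser "fastuser" \
  --targetpassword "fastpassword" \
  --targetdatabase "fastdb" \
  --targetschema "public" \
  --targettable "orders" \
  --method "DataDriven" \
  --distributeKeyColumn "region_id" \
  --datadrivenquery "SELECT region_id FROM dbo.ref_regions" \
  --loadmode "Append" \
  --degree 8
```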
Example: transfer a table from SQL Server to PostgreSQL with the RangeId method; `--degree -2` automatically adapts the degree of parallelism to half of the available CPUs.

```powershell
.\FastTransfer.exe `
--sourceconnectiontype "mssql" `
--sourceserver "localhost" `
--sourceuser "fastuser" `
--sourcepassword "fastpassword" `
--sourcedatabase "AdventureWorks2017" `
--sourceschema "Person" `
--sourcetable "Person" `
--targetconnectiontype "pgcopy" `
--targetserver "localhost" `
--targetuser "fastuser" `
--targetpassword "fastpassword" `
--targetdatabase "fastdb" `
--targetschema "public" `
--targettable "Person" `
--method "RangeId" `
--distributeKeyColumn "PersonID" `
--loadmode "Truncate" `
--degree -2 `
--runid "mssql-to-pgcopy-123456"
Example: transfer a table from PostgreSQL to SQL Server with the Ctid method (available for pgsql and pgcopy sources only) and name-based column mapping; again `--degree -2` uses half of the available CPUs.

```powershell
.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourceuser "fastuser" `
--sourcepassword "fastpassword" `
--sourcedatabase "fastdb" `
--sourceschema "Public" `
--sourcetable "Person" `
--targetconnectiontype "msbulk" `
--targetserver "localhost" `
--targetuser "fastuser" `
--targetpassword "fastpassword" `
--targetdatabase "AdventureWorks2017" `
--targetschema "Person" `
--targettable "Person" `
--method "Ctid" ` # Ctid is for pgsql and pgcopy source only
--loadmode "Truncate" `
--degree -2 `
--runid "pgsql-to-msbulk-123456" `
--mapmethod "Name"
For more examples, see the examples page.
Download the latest version from the link provided by Arpe.io and extract the files to a directory on your machine. On Linux, also make the binary executable with `chmod +x FastTransfer`, as shown below.
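A minimal sketch of the Linux setup; the download URL and archive name are placeholders, so substitute the link provided by Arpe.io:

```bash
# Placeholder URL and archive name: use the download link provided by Arpe.io.
curl -LO "https://example.org/downloads/FastTransfer_linux-x64.zip"
unzip FastTransfer_linux-x64.zip -d fasttransfer
cd fasttransfer
chmod +x FastTransfer      # make the binary executable
./FastTransfer --help      # quick smoke test
```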
For the trial, that's it! You are ready to use FastTransfer.

For editions other than the trial, you need a valid license. By default FastTransfer looks for a `FastTransfer.lic` file in the same directory. You can also point it at another path, or at a URL within your organisation where you store or share the license file, using the `--license` parameter.
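For instance, loading the license from an internal URL; the URL is a placeholder, and the connection values are taken from the first example above:

```bash
# Placeholder URL: point --license at wherever your organisation shares the file.
./FastTransfer --license "https://intranet.example.org/FastTransfer.lic" \
  --sourceconnectiontype "mssql" --sourceserver "localhost" \
  --sourceuser "fastuser" --sourcepassword "fastpassword" \
  --sourcedatabase "AdventureWorks2017" --sourceschema "Person" --sourcetable "Person" \
  --targetconnectiontype "pgcopy" --targetserver "localhost" \
  --targetuser "fastuser" --targetpassword "fastpassword" \
  --targetdatabase "fastdb" --targetschema "public" --targettable "Person"
```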
FastTransfer uses a settings file to configure the logging. The default file is `FastTransfer_settings.json` in the same folder as the executable; you can specify a custom file using the `-O`/`--settingsfile` option.

You can download the FastTransfer_settings.json template to get started. The default file is already configured to use a SQL Server database for logging; you can also use the console and file sinks. Start from the default file and modify it to suit your needs. The template begins as follows (excerpt):
```json
{
  "ConnectionStrings": {
    "MS_FastTransferLogs": "Server=localhost;Database=FastTransferLogs;Integrated Security=SSPI;Encrypt=True;TrustServerCertificate=True"
  },
  "Serilog": {
    "Using": [
      "Serilog.Sinks.Console",
      "Serilog.Sinks.File",
      "Serilog.Sinks.MSSqlServer",
      "Serilog.Enrichers.Environment",
      "Serilog.Enrichers.Thread",
      "Serilog.Enrichers.Process",
      "Serilog.Enrichers.Context"
    ],
    "MinimumLevel": "Debug",
```
FastTransfer is also available through wrappers. Fully functional and supported wrappers exist for T-SQL (a CLR procedure that avoids xp_cmdshell) and for PostgreSQL. They let you call FastTransfer from within the database (you need to copy FastTransfer onto the host where the instance/cluster resides).
FastTransfer is distributed under a commercial license. You can buy FastTransfer online or contact us at sales@arpe.io for more information.