Question :
I’m testing my PowerShell script that bulk-copies multiple tables between SQL Server and Oracle databases.
When the destination is Oracle, I can monitor the progress by executing
select count(*) from MyTable
because the individual BatchSize chunks are committed separately.
With a SQL Server destination, there seems to be a single transaction, which was rolled back due to a network error on the first trial; the second trial is still in progress. Will using an internal transaction commit the individual chunks?
Bulk copy seems to be all-or-nothing: if it fails, you have to restart from the beginning.
$source = 'ora'
$dest = 'sql'
$SourceConnectionString_sql = "Data Source=localhost;Initial Catalog=MySQLDatabase;Integrated Security=True"
$SourceConnectionString_ora = "Data Source=MyOracleDatabase;User ID=MyUser;Password=MyPassword"
$DestinationConnectionString_sql = "Data Source=localhost;Initial Catalog=MySqleDatabase2;Integrated Security=True"
$DestinationConnectionString_ora = "Data Source=MyOracleDatabase2;User ID=MyUser;Password=MyPassword"
# -----------------------------------------------------------------------------
# Full table list (overridden on the next line to a single table for testing):
$SQL_Select_TABLES = "SELECT name FROM sys.tables ORDER BY 1"
$SQL_Select_TABLES = "SELECT TOP 1 name FROM sys.tables"
$ORA_Select_TABLES = "SELECT TABLE_NAME Name FROM user_tables ORDER BY 1"
$ORA_Select_TABLES = "SELECT TABLE_NAME Name FROM user_tables WHERE ROWNUM = 1"
$notifyAfter = 1000
# -----------------------------------------------------------------------------
if ($null -eq $ora_dll)
{
$ora_dll = [System.Reflection.Assembly]::LoadWithPartialName("Oracle.DataAccess")
}
$global:start = get-date
$notifyFunction = {
# $args[1] is the rows-copied event args; report table, row count and elapsed seconds
$elapsed = [int]((get-date) - $global:start).TotalSeconds
Write-Host "$tableName $($args[1].RowsCopied) $elapsed"
}
if ($dest -eq 'sql')
{
$bulkCopy = new-object ("Data.SqlClient.SqlBulkCopy") $DestinationConnectionString_sql
}
else
{
$bulkCopy = new-object ("Oracle.DataAccess.Client.OracleBulkCopy") $DestinationConnectionString_ora
}
$bulkCopy.BatchSize = 50000
$bulkCopy.BulkCopyTimeout=10000000
if ($notifyAfter -gt 0){
$bulkCopy.NotifyAfter=$notifyafter
if ($dest -eq 'sql')
{
$bulkCopy.Add_SqlRowsCopied($notifyFunction)
}
else
{
$bulkCopy.Add_OracleRowsCopied($notifyFunction)
}
}
if ($source -eq 'sql')
{
$sourceConnection = New-Object System.Data.SqlClient.SQLConnection($SourceConnectionString_sql)
$sourceConnection.open()
$commandSourceData = New-Object system.Data.SqlClient.SqlCommand($SQL_Select_TABLES, $sourceConnection)
$da = New-Object system.Data.SqlClient.SqlDataAdapter($commandSourceData)
$dt = New-Object system.Data.datatable
[void]$da.fill($dt)
}
else
{
$sourceConnection = New-Object Oracle.DataAccess.Client.OracleConnection($SourceConnectionString_ora)
$sourceConnection.open()
$commandSourceData = New-Object Oracle.DataAccess.Client.OracleCommand($Ora_Select_TABLES, $sourceConnection)
$da = New-Object Oracle.DataAccess.Client.OracleDataAdapter($commandSourceData)
$dt = New-Object system.Data.datatable
[void]$da.fill($dt)
}
foreach ($table in $dt)
{
$tableName = $table.name
$tableName  # echo the current table as progress output
try
{
$sql = "SELECT * FROM $tableName"
if ($source -eq 'sql')
{
$commandSourceData = New-Object system.Data.SqlClient.SqlCommand($sql,$sourceConnection)
}
else
{
$commandSourceData = New-Object Oracle.DataAccess.Client.oracleCommand($sql,$sourceConnection)
}
$reader = $commandSourceData.ExecuteReader()
$bulkCopy.DestinationTableName = $tableName
$bulkCopy.WriteToServer($reader)
}
catch
{
$ex = $_.Exception
Write-Host "Write-DataTable $($tableName): $($ex.Message)"
}
finally
{
# guard against ExecuteReader having failed before $reader was assigned
if ($reader) { $reader.close() }
}
}
$sourceConnection.close()
$bulkCopy.close()
((get-date) - $start).TotalSeconds
Answer :
If I understand you correctly:
- You’d use BatchSize to break the load into chunks, e.g. 10k rows
- UseInternalTransaction determines whether each batch runs in its own transaction
So:
- BatchSize = 0 -> UseInternalTransaction is irrelevant (the whole load is one batch)
- BatchSize > 0 -> each batch is committed separately if UseInternalTransaction is specified
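Concretely, for the SQL Server path: SqlBulkCopyOptions.UseInternalTransaction has to be passed to the SqlBulkCopy constructor (it can’t be set on the object afterwards), so your script would change roughly like this (a sketch reusing the variable names from your question):

```powershell
# Each BatchSize chunk now commits in its own transaction, so batches copied
# before a failure stay committed instead of the whole load rolling back.
$options = [System.Data.SqlClient.SqlBulkCopyOptions]::UseInternalTransaction
$bulkCopy = New-Object Data.SqlClient.SqlBulkCopy($DestinationConnectionString_sql, $options)
$bulkCopy.BatchSize = 50000
```

Note it still isn’t restartable: after a failure you’d have to truncate or deduplicate the destination table yourself before rerunning, since SqlBulkCopy keeps no restart point.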
FYI, to monitor a SQL Server load in a non-blocking fashion:
select count(*) from MyTable WITH (NOLOCK)