Using TransactionScope multiple times

Ignore anything about object structure or responsibilities for persistence. This is an example to help me understand how I should be doing things, partly because it seems not to work when I try to replace Oracle with SQLite as the DbProviderFactory, and I’m wondering where I should spend time investigating.

Let’s begin with an example:

public class ThingPart
{
    private DbProviderFactory connectionFactory;

    public void SavePart()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SavePartA();
            SavePartB();
            ts.Complete();
        }
    }

    private void SavePartA()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }

    private void SavePartB()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }
}

And something which represents the Thing:

public class Thing
{
    private DbProviderFactory connectionFactory;
    private List<ThingPart> parts;

    public void SaveThing()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SaveHeader();
            foreach (ThingPart part in parts)
            {
                part.SavePart();
            }
            ts.Complete();
        }
    }

    private void SaveHeader()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }
}

I also have something which manages many things.

public class ThingManager
{
    private List<Thing> things;

    public void SaveThings()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            foreach (Thing thing in things)
            {
                thing.SaveThing();
            }
            ts.Complete(); // without this the whole aggregate transaction rolls back
        }
    }
}

It’s my understanding that:

The connections will not be new and will be reused from the pool each time (assuming the provider supports connection pooling and it is enabled).

This depends – e.g. the same connection will be reused for successive steps in your aggregate transaction if all connections are to the same DB, with the same credentials, and if SQL Server is able to use the Lightweight Transaction Manager (SQL Server 2005 and later). (But SQL connection pooling still works, if that was what you were asking.)
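As a quick sanity check, you can inspect the ambient transaction to see whether the Lightweight Transaction Manager is still in charge. A minimal sketch using only System.Transactions (DistributedIdentifier stays Guid.Empty until the transaction is promoted to MSDTC):

using System;
using System.Transactions;

using (TransactionScope ts = new TransactionScope())
{
    // ... open one or more connections and do work here ...

    // Guid.Empty: the Lightweight Transaction Manager still owns the transaction.
    // Non-empty GUID: the transaction has been promoted to MSDTC.
    Guid distributedId = Transaction.Current.TransactionInformation.DistributedIdentifier;
    Console.WriteLine(distributedId == Guid.Empty ? "LTM" : "MSDTC");

    ts.Complete();
}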


The transactions will be such that if I just called ThingPart.SavePart (from outside the context of any other class) then parts A and B would either both be saved or neither would be.

Atomic SavePart – yes, this will give you the ACID behaviour you expect.


If I call Thing.SaveThing (from outside the context of any other class) then the header and all the parts will all be saved or none will be, i.e. everything will happen in the same transaction.

Yes, nested TransactionScopes (with the default scope option, Required, inner scopes join the ambient transaction) will also be atomic. The transaction will only commit when the outermost TransactionScope is completed.


If I call ThingManager.SaveThings then all my things will be saved or none will be, i.e. everything will happen in the same transaction.

Yes, also atomic, but note that you will be escalating SQL Server locks. If it makes sense to commit each Thing (and its ThingParts) individually, this would be preferable from a SQL Server concurrency point of view.
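For instance, a minimal sketch of committing each Thing individually (SaveThingsIndividually is a hypothetical variant, not part of the original code; each SaveThing() then becomes its own root transaction):

public class ThingManager
{
    private List<Thing> things;

    public void SaveThingsIndividually()
    {
        // No outer TransactionScope: each thing.SaveThing() creates its own
        // root transaction, so locks are released as each Thing commits and
        // a failure only rolls back the Thing currently being saved.
        foreach (Thing thing in things)
        {
            thing.SaveThing();
        }
    }
}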


If I change the DbProviderFactory implementation that is used, it shouldn’t make a difference.

The provider will need to be compatible as a TransactionScope resource manager (and probably DTC-compliant as well) – e.g. don’t move your database to Rocket U2 and expect TransactionScopes to work.

Just one gotcha – new TransactionScope() defaults to an isolation level of Serializable, which is often overly pessimistic for most scenarios – ReadCommitted is usually more applicable.
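A minimal sketch of overriding the default isolation level with the standard System.Transactions API:

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... SaveThing() / SavePart() calls as above ...
    ts.Complete();
}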

Reference

https://stackoverflow.com/questions/7553971/using-transactionscope-multiple-times

Search all tables, find primary keys with id, identity and auto-increment in SQL Server

The script below lists all the primary keys that have at least one int or bigint column, along with the other details asked for (identity seed, increment, last value, and related flags).

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 


SELECT OBJECT_SCHEMA_NAME(p.object_id) AS [Schema]
     , OBJECT_NAME(p.object_id) AS [Table]
     , i.name AS [Index]
     , p.partition_number
     , p.rows AS [Row Count]
     , i.type_desc AS [Index Type]
     , k.increment_value AS IncrementValue
     , k.last_value AS LastValue
     , k.seed_value AS SeedValue
     , k.is_nullable
     , k.is_identity
     , k.is_filestream
     , k.is_replicated
     , k.is_not_for_replication
FROM sys.partitions p
INNER JOIN sys.indexes i
        ON p.object_id = i.object_id
       AND p.index_id = i.index_id
INNER JOIN sys.tables s
        ON s.object_id = p.object_id
LEFT OUTER JOIN sys.identity_columns k
             ON p.object_id = k.object_id
WHERE 1 = 1
  AND EXISTS ( SELECT 1
                 FROM sys.columns c
           INNER JOIN sys.types t
                   ON c.user_type_id = t.user_type_id
                WHERE i.object_id = c.object_id
                  AND t.user_type_id IN (127, 56)  -- only bigint and int
             )
  AND i.is_primary_key = 1
  -- AND i.index_id < 2    -- get only the clustered indexes, if any exist
                           -- (otherwise heaps are included too)
  -- AND k.is_identity = 1 -- get only the identity columns
ORDER BY [Schema], [Table], [Index]

Reference

https://dba.stackexchange.com/questions/165266/search-all-table-find-primarykeys-with-id-int-bigint-and-enable-identity-aut

Is it possible to add an index on temp tables?

The @tableName syntax is a table variable. These are rather limited; the syntax is described in the documentation for DECLARE @local_variable. You can kind of have indexes on table variables, but only indirectly, by specifying PRIMARY KEY and UNIQUE constraints on columns. So, if the data in the columns you need an index on happens to be unique, you can do this. This may be “enough” for many use cases, but only for small numbers of rows. If you don’t have indexes on your table variable, the optimizer will generally treat it as if it contains one row (regardless of how many rows it actually has), which can result in terrible query plans if it actually holds hundreds or thousands of rows.
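For illustration, a sketch of the constraint-based workaround on a table variable (table and column names are made up):

DECLARE @orders TABLE
(
    OrderId INT PRIMARY KEY,           -- the PRIMARY KEY constraint provides an index
    CustomerName NVARCHAR(128) UNIQUE  -- UNIQUE also creates an index, but values must be distinct
);

INSERT INTO @orders VALUES (1, N'Alice'), (2, N'Bob');

SELECT * FROM @orders;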

The #tableName syntax is a locally-scoped temporary table. You can create these using either SELECT ... INTO #tableName or CREATE TABLE #tableName syntax. The scope of these tables is a little more complex than that of variables. If you have CREATE TABLE #tableName in a stored procedure, all references to #tableName in that procedure refer to that table. If you simply reference #tableName in a stored procedure (without creating it), the lookup falls through to the caller’s scope. So you can create #tableName in one procedure, call another procedure, and in that other procedure read and update #tableName. However, once the procedure that created #tableName runs to completion, the table is automatically dereferenced and cleaned up by SQL Server. So there is no reason to clean these tables up manually, unless you have a procedure that is meant to loop or run indefinitely, or for long periods of time.

You can define complex indexes on temporary tables, just as if they were permanent tables, for the most part. So if you need to index columns that contain duplicate values, which prevents you from using UNIQUE, this is the way to go. You do not even have to worry about name collisions on indexes: if you run something like CREATE INDEX my_index ON #tableName(MyColumn) in multiple sessions that have each created their own table called #tableName, SQL Server does some magic so that the reuse of the global-looking identifier my_index does not explode.
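A minimal sketch of such a non-unique index on a temp table (names are made up):

CREATE TABLE #orders
(
    OrderId INT,
    CustomerName NVARCHAR(128)
);

-- A plain (non-unique) index works even with duplicate values,
-- which the UNIQUE-constraint workaround on table variables cannot handle.
CREATE INDEX my_index ON #orders (CustomerName);

INSERT INTO #orders VALUES (1, N'Alice'), (2, N'Alice');

SELECT * FROM #orders WHERE CustomerName = N'Alice';

DROP TABLE #orders;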

Additionally, temporary tables automatically build statistics, etc., like normal tables, and the query optimizer recognizes that they can contain more than one row, which can in itself yield great performance gains over table variables. Of course this adds a tiny amount of overhead, though it is likely worth it and not noticeable if your query’s runtime is longer than a second.

For example, you can create a PRIMARY KEY on a temp table:

IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
 DROP TABLE #tempTable

CREATE TABLE #tempTable 
(
   Id INT PRIMARY KEY
  ,Value NVARCHAR(128)
)

INSERT INTO #tempTable
VALUES 
     (1, 'first value')
    ,(3, 'second value')
    -- will cause Violation of PRIMARY KEY constraint 'PK__#tempTab__3214EC071AE8C88D'. Cannot insert duplicate key in object 'dbo.#tempTable'. The duplicate key value is (1).
    --,(1, 'first value one more time')


SELECT  * FROM #tempTable

Reference

https://stackoverflow.com/questions/6385243/is-it-possible-to-add-index-to-a-temp-table-and-whats-the-difference-between-c

DB Schema and Permissions

If the application doesn’t run and complains about permissions, e.g. EXECUTE permission, we need to check the schema permissions. One reason this happens is restoring a database.

Suppose our user name is FMUser and the database name is FMStoreDev. Start under Login Properties of FMUser – Security.

Under Database -> Security -> Users -> FMUser, check the following (in case a stored procedure does not run):

On the Securables tab, click Search;

Select the first option and click OK;

Click Object Types and select Schemas;

Click Browse and select your schema;

Select the FM schema and grant EXECUTE permission to FMUser;

Since this is schema bound, go to Security -> Schemas. The schema name should be your custom schema and the owner should be dbo;

Click the Permissions tab and make sure EXECUTE permission is selected for FMUser;

These changes will help solve the database EXECUTE permission problem.
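As a rough T-SQL equivalent of the GUI steps above (assuming the custom schema is named FM):

USE FMStoreDev;
GO
GRANT EXECUTE ON SCHEMA::FM TO FMUser;
GO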

An alternative to the steps above is remapping the users (to restore their permissions) after restoring the database:

USE FMStoreDev;  
GO  
EXEC sp_change_users_login 'Update_One', 'FMUser', 'FMUser';  
GO
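
Note that sp_change_users_login is deprecated; on current SQL Server versions the documented replacement for remapping an orphaned user is ALTER USER:

USE FMStoreDev;
GO
ALTER USER FMUser WITH LOGIN = FMUser;
GO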