Search all tables, find primary keys with id, identity and auto-increment in SQL Server

The script below lists all primary keys on tables that have at least one int or bigint column, along with related identity details (seed, increment, last value, and so on).

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 


SELECT OBJECT_SCHEMA_NAME(p.object_id) AS [Schema]
    , OBJECT_NAME(p.object_id) AS [Table]
    , i.name AS [Index]
    , p.partition_number
    , p.rows AS [Row Count]
    , i.type_desc AS [Index Type]
    , k.increment_value AS IncrementValue
    , k.last_value AS LastValue
    , k.seed_value AS SeedValue
    , k.is_nullable
    , k.is_identity
    , k.is_filestream
    , k.is_replicated
    , k.is_not_for_replication
FROM sys.partitions p

INNER JOIN sys.indexes i
        ON p.object_id = i.object_id
       AND p.index_id = i.index_id

INNER JOIN sys.tables s
        ON s.object_id = p.object_id

LEFT OUTER JOIN sys.identity_columns k
             ON p.object_id = k.object_id

WHERE 1=1

  AND EXISTS ( SELECT 1
                 FROM sys.columns c
           INNER JOIN sys.types t
                   ON c.user_type_id = t.user_type_id
                WHERE i.object_id = c.object_id
                  AND t.user_type_id IN (127, 56)  -- only bigint and int
             )

  AND i.is_primary_key = 1

  -- AND i.index_id < 2    -- get only the clustered indexes, if any exist
                           -- (leave it commented out to get heaps too)

  -- AND k.is_identity = 1 -- get only the identity columns

ORDER BY [Schema], [Table], [Index]

Reference

https://dba.stackexchange.com/questions/165266/search-all-table-find-primarykeys-with-id-int-bigint-and-enable-identity-aut

Is it possible to add an index on temp tables?

The @tableName syntax is a table variable. They are rather limited. The syntax is described in the documentation for DECLARE @local_variable. You can kind of have indexes on table variables, but only indirectly by specifying PRIMARY KEY and UNIQUE constraints on columns. So, if your data in the columns that you need an index on happens to be unique, you can do this. See this answer. This may be “enough” for many use cases, but only for small numbers of rows. If you don’t have indexes on your table variable, the optimizer will generally treat table variables as if they contain one row (regardless of how many rows there actually are) which can result in terrible query plans if you have hundreds or thousands of rows in them instead.
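
For example, here is a minimal sketch of a table variable that gets indexes indirectly through PRIMARY KEY and UNIQUE constraints (the table and column names are just placeholders for this illustration):

DECLARE @orders TABLE
(
   OrderId   INT           NOT NULL PRIMARY KEY   -- backed by an index behind the scenes
  ,OrderCode NVARCHAR(32)  NOT NULL UNIQUE        -- backed by an index, but only works if the values are unique
  ,Amount    DECIMAL(10,2) NULL                   -- cannot be indexed on a table variable this way
);

INSERT INTO @orders (OrderId, OrderCode, Amount)
VALUES (1, N'A-001', 10.00)
      ,(2, N'A-002', 25.50);

SELECT * FROM @orders;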

The #tableName syntax is a locally-scoped temporary table. You can create these either using SELECT…INTO #tableName or CREATE TABLE #tableName syntax. The scope of these tables is a little bit more complex than that of variables. If you have CREATE TABLE #tableName in a stored procedure, all references to #tableName in that stored procedure will refer to that table. If you simply reference #tableName in the stored procedure (without creating it), it will look into the caller’s scope. So you can create #tableName in one procedure, call another procedure, and in that other procedure read/update #tableName. However, once the procedure that created #tableName runs to completion, that table will be automatically unreferenced and cleaned up by SQL Server. So, there is no reason to manually clean up these tables unless if you have a procedure which is meant to loop/run indefinitely or for long periods of time.
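
A rough illustration of that scoping (the procedure and table names are made up for this example):

CREATE PROCEDURE dbo.ReadFromTemp
AS
BEGIN
    -- #workList is not created here; at run time it resolves to the caller's table
    SELECT * FROM #workList;
END
GO

CREATE PROCEDURE dbo.CreateTempAndCall
AS
BEGIN
    CREATE TABLE #workList (Id INT, Label NVARCHAR(50));
    INSERT INTO #workList VALUES (1, N'demo');

    EXEC dbo.ReadFromTemp;   -- the called procedure sees the caller's #workList

    -- when this procedure completes, #workList is cleaned up automatically
END
GO

EXEC dbo.CreateTempAndCall;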

You can define complex indexes on temporary tables, just as if they are permanent tables, for the most part. So if you need to index columns but have duplicate values which prevents you from using UNIQUE, this is the way to go. You do not even have to worry about name collisions on indexes. If you run something like CREATE INDEX my_index ON #tableName(MyColumn) in multiple sessions which have each created their own table called #tableName, SQL Server will do some magic so that the reuse of the global-looking identifier my_index does not explode.
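
For instance, a plain (non-unique) index on a temp table that contains duplicate values (the names below are only illustrative):

CREATE TABLE #customers
(
   CustomerId INT
  ,City       NVARCHAR(64)
);

INSERT INTO #customers VALUES (1, N'Berlin'), (2, N'Berlin'), (3, N'Paris');

-- duplicates in City are fine because the index is not UNIQUE
CREATE INDEX my_index ON #customers (City);

SELECT * FROM #customers WHERE City = N'Berlin';

DROP TABLE #customers;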

Additionally, temporary tables automatically get statistics built on them, just like normal tables. The query optimizer recognizes that temporary tables can have more than just one row in them, which can in itself result in great performance gains over table variables. Of course, this also adds a tiny amount of overhead, though that overhead is likely worth it and not noticeable if your query’s runtime is longer than one second.

For example, you can create a PRIMARY KEY on a temp table:

IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
 DROP TABLE #tempTable

CREATE TABLE #tempTable 
(
   Id INT PRIMARY KEY
  ,Value NVARCHAR(128)
)

INSERT INTO #tempTable
VALUES 
     (1, 'first value')
    ,(3, 'second value')
    -- will cause Violation of PRIMARY KEY constraint 'PK__#tempTab__3214EC071AE8C88D'. Cannot insert duplicate key in object 'dbo.#tempTable'. The duplicate key value is (1).
    --,(1, 'first value one more time')


SELECT  * FROM #tempTable

Reference

https://stackoverflow.com/questions/6385243/is-it-possible-to-add-index-to-a-temp-table-and-whats-the-difference-between-c

DB Schema and Permissions

If the application doesn’t run and complains about permissions (e.g. EXECUTE permission), we need to check the schema permissions. One of the reasons this happens is when we restore a database.

Suppose our user name is FMUser and the database name is FMStoreDev. Open Login Properties of FMUser – Security;

Under Database -> Security -> Users -> FMUser, check the following (in case a stored procedure does not run);

On the Securables tab, click Search;

Select the first option and click OK;

Click on Object Types and select Schemas;

Click on Browse and select your schema;

Select the FM schema and grant EXECUTE permission to FMUser;

Since this is schema bound, go to Security -> Schemas. The schema name should be your custom schema and the owner should be dbo;

Click on the Permissions tab. Make sure EXECUTE permission is selected for FMUser;

These changes will help to solve the database EXECUTE permission problem.
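
The same grant can also be scripted instead of clicking through SSMS; a minimal sketch, assuming the custom schema is named FM as above:

USE FMStoreDev;
GO
-- grant EXECUTE on every object in the FM schema to the FMUser database user
GRANT EXECUTE ON SCHEMA::FM TO FMUser;
GO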

The steps above are an alternative to remapping the user (to restore permissions) after restoring the database, which would be done like this:

USE FMStoreDev;  
GO  
EXEC sp_change_users_login 'Update_One', 'FMUser', 'FMUser';  
GO  
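
Note that sp_change_users_login is deprecated; on current SQL Server versions the same remapping can be done with ALTER USER:

USE FMStoreDev;
GO
-- remap the orphaned database user FMUser to the server login FMUser
ALTER USER FMUser WITH LOGIN = FMUser;
GO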

SQL Server STRING_SPLIT and STRING_AGG functions

Here is a short example. Let’s say we have this sample JSON output, and we would like to split it on the “}” character;

DECLARE @json NVARCHAR(MAX) = '[{
  "text": "Books",
  "nodes": [{
	"text": "Programming Books",
	"rackNumber": "1",
	"moreInfo": [{
			"text": "C# book",
			"price": "$49.99"
		  },
		  {
			"text": "React book",
			"price": "$40.99"
		  }
		]},
		{
	"text": "Database Books",
	"rackNumber": "2",
	"moreInfo": [{
			"text": "SQL Server 2022",
			"price": "$52.99"
  		  },
		  {
			"text": "Maria database",
			"price": "$20.99"
		  }
		]}
	]
}]
'

SQL Server has some handy functions for this: STRING_SPLIT to split the string and STRING_AGG to re-create it. I am going to demo them here;

----split strings to make changes
IF OBJECT_ID('tempdb..#ADDF') IS NOT NULL DROP TABLE #ADDF
;WITH CTE AS
 (
	SELECT value FROM STRING_SPLIT(@json, '}')
 )
 SELECT * INTO #ADDF FROM CTE;

 --show me the JSON. The string should now be split into multiple parts
 SELECT * FROM #ADDF

 --TODO..do whatever you want to do with the split JSON fragments

--return the modified string. You should see the JSON re-assembled into a single value here
SELECT STRING_AGG(value, '}') AS jsonBack
FROM #ADDF;
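
One caveat: STRING_SPLIT does not guarantee the order of the rows it returns, so the re-assembled string is not guaranteed to come back in the original order. A minimal order-safe sketch, assuming SQL Server 2022+ (where STRING_SPLIT accepts an enable_ordinal argument) and run in the same batch as the DECLARE @json above:

-- split with an ordinal column, then aggregate back in that explicit order
SELECT STRING_AGG(value, '}') WITHIN GROUP (ORDER BY ordinal) AS jsonBack
FROM STRING_SPLIT(@json, '}', 1);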

Happy coding.