JavaScript Primitive (value) types

There are five primitive types: Number, String, Boolean, null, and undefined. Their values are stored directly in the variable for a given context. We can use the typeof operator to identify primitive types, with the exception of null, which must be compared directly against the special value null.

color1 = "red"
color2 = color1
color2 = "green"

Inspect the type of the variables;

typeof(color1)		//"string"
typeof(color2)		//"string"
typeof(10)		//"number"
typeof(true)		//"boolean"
typeof(undefined)	//"undefined"

null is a tricky value type.

typeof(null)		//"object"

This has been acknowledged as an error by TC39, the committee that designs and maintains JavaScript.

The best way to determine if a value is null is to compare it against null directly;

value = null
value === null	//true

value = false
value === null	//false

"10" == 10	//true
"10" === 10	//false

When you use the double equals, the string “10” and the number 10 are considered equal because the double equals converts the string into a number before it makes the comparison. The triple equals operator doesn’t consider these values equal because they are two different types.

console.log(undefined == null)		//true
console.log(undefined === null)		//false

Null is a data type that has a single value null. When you’re trying to identify null, use triple equals so that you can correctly identify the type.

The string, number, and Boolean primitive types have methods. The null and undefined types have no methods. Strings, in particular, have numerous methods.

var name = "Shahzad Khan"
var lowerName = name.toLowerCase()	//convert to lowercase
var firstLetter = name.charAt(0)	//get the first character
var middleOfName = name.substring(2,5)	//get characters 2-4
var nameLength = name.length		//return the number of characters
var count = 10
var fixedCount = count.toFixed(2)	//convert to "10.00"
var hexCount = count.toString(16)	//convert to "a"
var flag = true;
var stringFlag = flag.toString()	//convert to "true"

Primitive values are not objects, even though they have methods.
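To see that a primitive is not an object, compare it with an explicit wrapper object (a minimal sketch; the variable name is just for illustration);

var greeting = "hello"
typeof greeting			//"string" – a primitive
typeof new String("hello")	//"object" – an explicit wrapper object
greeting.toUpperCase()		//"HELLO" – the primitive is temporarily wrapped for the method call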

NaN ("not a number") is returned when an arithmetic operation produces a result that is not a number.

a = 9/undefined		//displays NaN
typeof a		//displays "number"

Comparison operators (>, <, ==, !=, >=, <=) return a Boolean as their result.

var f = 10
var t = 20
f != t		//return true

The undefined value is returned when we access a property that does not exist on an object, or when we use a variable before it is declared or before it is assigned a value;

typeof(z)		//"undefined" – the variable is not declared

Some values of these types evaluate to true and some evaluate to false in a Boolean context;

false, 0 (zero), "" (empty string), null, undefined, NaN (NaN itself is not a Boolean value; it can be detected reliably with the isNaN function)

To test this;

0 == false		//true

All other values are truthy.
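A quick way to verify the falsy values (a minimal sketch using the Boolean function; isNaN is used for NaN as noted above);

Boolean(0)		//false
Boolean("")		//false
Boolean(null)		//false
Boolean(undefined)	//false
Boolean(NaN)		//false
isNaN(NaN)		//true
Boolean("0")		//true – a non-empty string is truthy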

When evaluating the value of a variable in a conditional statement, we can write this;

var a = null
if (a == undefined || a == null) {
    a = 1;
}

It is possible to simply write;

var a = null
if (!a) {
    a = 10
}

Likewise, if a variable has a value, we can write something like;

if (a) {
    console.log(a)
}

This shortcut is useful and extensively used in JavaScript code.

JavaScript is a dynamically typed language. In a statically typed language, the compiler can perform type checking: if a variable is declared to store an integer, assigning it any other type of value is caught at compile time. JavaScript variables derive their types from the values they are assigned at run time, and a variable can change its type if it is assigned a new value.

As a result, it is not possible to perform static type checking in JavaScript.

Consider this example;

function add(v1, v2) {
    return v1 + v2
}

If we invoke this function as;

add(1,1)		//return 2

This is the expected result. We can also invoke this function as;

add("1",1)		//return 11

This is unexpected because JavaScript has performed string concatenation between the number and the string. This is one of the reasons why the typeof operator is so important. It allows the function to be rewritten as;

function add(v1, v2) {
    if (typeof v1 === "number" && typeof v2 === "number") {
        return v1 + v2;
    } else {
        throw "both arguments must be numbers";
    }
}

Now try to pass a string argument;

add("1", 1)		//throws "both arguments must be numbers"

On a side note, using the "+" operator on different data types produces a variety of surprising results;

{} + []		//0 – the leading {} is parsed as an empty block, so this evaluates as +[]
[] + {}		//"[object Object]"

Even using the "+" operator on the same data types produces odd results;

[] + []		//"" (empty string)
{} + {}		//NaN in some consoles, "[object Object][object Object]" in others

Handling special characters in data using SQL Server

I ran into a situation where I needed to handle special characters in incoming data, for example "USD100" or "100USD". Here is the workaround;

If we try it like this, TRY_CAST gives back a NULL value;

DECLARE @Value NVARCHAR(50) = '100USD';
SELECT TRY_CAST(@Value AS decimal);

The workaround is to strip the known non-numeric characters before casting;

DECLARE @Value NVARCHAR(50) = '100USD';
SELECT TRY_CAST(REPLACE(REPLACE(REPLACE(@Value, 'USD', ''), '#', ''), '$', '') AS decimal);
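To verify the workaround against both forms of the incoming data (a quick check; the variable names are only for illustration);

DECLARE @v1 NVARCHAR(50) = '100USD', @v2 NVARCHAR(50) = 'USD100';
SELECT TRY_CAST(REPLACE(@v1, 'USD', '') AS decimal) AS Value1,  --100
       TRY_CAST(REPLACE(@v2, 'USD', '') AS decimal) AS Value2;  --100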

Resources

https://docs.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql?view=sql-server-ver15

https://www.sqlshack.com/replace-ascii-special-characters-sql-server/

Uncaught ReferenceError: jQuery is not defined

You are getting this error in your ASP.NET Core application. Your Layout page has the proper reference;

    <script src="~/lib/jquery/dist/jquery.js"></script>

When you try to navigate to your content page where you want to use jQuery;

$(function () {
    alert("Test");
});

You see the above error. Dammit. What's wrong? The jQuery library is referenced in the Layout page and is supposed to work in the content page.

Chances are that your script runs before jQuery is loaded. The Layout page renders the Scripts section (via @RenderSection("Scripts", required: false)) after the jQuery reference, so define the script section on your content page and move your code into it;

@section Scripts {
  <script>
    $(function () {
      alert("Test"); 
    });
  </script>
}

This will solve the problem and pain will go away.
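For reference, the relevant part of the Layout page should look roughly like this (a sketch; your script path may differ), so that the Scripts section is rendered after jQuery;

    <script src="~/lib/jquery/dist/jquery.js"></script>
    @RenderSection("Scripts", required: false)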

Use PowerShell to export SQL BLOB column data

BLOB data can be exported using PowerShell in a simple way: query the data with ADO.NET (SqlClient) and then use a BinaryWriter to write it to the local hard drive.

This is how we can use PowerShell to export SQL Blob data to file.

## Export of "larger" Sql Server Blob to file            
## with GetBytes-Stream.         
# Configuration data            
$Server = ".\SQL105CTP3";         # SQL Server Instance.            
$Database = "ToDO";            
$Dest = "D:\Export\";             # Path to export to.            
$bufferSize = 8192;               # Stream buffer size in bytes.            
# Select-Statement for name & blob            
# with filter.            
$Sql = "SELECT [FileName]
              ,[Document]
        FROM Production.Document
        WHERE FileExtension = '.xlsx'";            
            
# Open ADO.NET Connection            
$con = New-Object Data.SqlClient.SqlConnection;            
$con.ConnectionString = "Data Source=$Server;" +             
                        "Integrated Security=True;" +            
                        "Initial Catalog=$Database";            
$con.Open();            
            
# New Command and Reader            
$cmd = New-Object Data.SqlClient.SqlCommand $Sql, $con;            
$rd = $cmd.ExecuteReader();            
            
# Create a byte array for the stream.            
$out = [array]::CreateInstance('Byte', $bufferSize)            
            
# Looping through records            
While ($rd.Read())            
{            
    Write-Output ("Exporting: {0}" -f $rd.GetString(0));                    
    # New BinaryWriter            
    $fs = New-Object System.IO.FileStream ($Dest + $rd.GetString(0)), Create, Write;            
    $bw = New-Object System.IO.BinaryWriter $fs;            
               
    $start = 0;            
    # Read first byte stream            
    $received = $rd.GetBytes(1, $start, $out, 0, $bufferSize - 1);            
    While ($received -gt 0)            
    {            
       $bw.Write($out, 0, $received);            
       $bw.Flush();            
       $start += $received;            
       # Read next byte stream            
       $received = $rd.GetBytes(1, $start, $out, 0, $bufferSize - 1);            
    }            
            
    $bw.Close();            
    $fs.Close();            
}            
            
# Closing & Disposing all objects            
$fs.Dispose();            
$rd.Close();            
$cmd.Dispose();            
$con.Close();            
            
Write-Output ("Finished");

The server and database names are hard-coded in this script. To parameterize them, add this as the first line;

param ($servername='.\SQL105CTP3', $databasename = 'ToDo',  $envname='Dev')
       

and then change these variables to get their values from the parameters;

$Server = $servername;         # SQL Server Instance.            
$Database = $databasename;

Save the script and run it like this;

.\BLOBimagesCopy.ps1 -servername '(local)' -databasename 'ToDO' -envname 'dev'

Resources

https://social.technet.microsoft.com/wiki/contents/articles/890.export-sql-server-blob-data-with-powershell.aspx

https://www.red-gate.com/simple-talk/sysadmin/powershell/how-to-use-parameters-in-powershell/

Filestream in SQL Server

We want to save multiple types of data: documents, images, audio, or video. The BLOB data type can be used to store this kind of data, but it is limited to 2 GB in size. BLOB data also slows down database performance and takes significant resources to read. For details, refer here;

https://shahzadblog.com/2021/03/29/store-and-retrieve-blob-data-without-file-stream/

FILESTREAM allows storing these large documents on the file system itself. There is no limit on storage, compared to the 2 GB limit of BLOB storage. Applications can access these files using the NTFS streaming APIs. FILESTREAM is not a separate SQL Server data type; it is a storage attribute applied to a varbinary(max) column.
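For example, in the table created later in this post, the FILESTREAM attribute is applied to a varbinary(max) column;

	[Doc_Content] [varbinary](max) FILESTREAM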

FILESTREAM does not use the buffer pool memory for caching these objects. If we cached these large objects in SQL Server memory, it would cause issues for normal database processing. Therefore, FILESTREAM caches them in the system cache, providing performance benefits without affecting core SQL Server performance.

This is how to enable the FILESTREAM feature in SQL Server;

Since I will be doing a lot of inserts, updates, and deletes, I am going to change the recovery model of the selected database to "Simple".
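This can be done with a single statement (assuming the TODO database used later in this post);

ALTER DATABASE TODO SET RECOVERY SIMPLE;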

Make sure that you have enabled FILESTREAM at the SQL Server instance level;

Checking the status of Filestream;

--check filestream status
USE master
Go
EXEC sp_configure
Go

0 = FILESTREAM is disabled.

1 = only T-SQL access to FILESTREAM data is allowed.

2 = T-SQL access and local streaming access is allowed.

3 = T-SQL access and local and remote streaming access is allowed.

You can also use this query to check status;

exec sp_configure 'filestream access level'
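If the access level needs to be changed, it can be set with sp_configure as well (a quick sketch; level 2 allows both T-SQL and local streaming access);

exec sp_configure 'filestream access level', 2
RECONFIGURE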

I am going to create my first table with filestream enabled;

--Create a table to work with Filestream
USE TODO
GO
CREATE TABLE [dbo].[Document](
    [Doc_Num] uniqueidentifier ROWGUIDCOL UNIQUE NOT NULL DEFAULT newsequentialid(),
    [Extension] [varchar](50) NULL,
    [FileName] [varchar](200) NULL,
    [FileSize] numeric(10,5),
    [Doc_Content] [varbinary](max) FILESTREAM
) ON [PRIMARY]

Oops. I got this error;

I haven't enabled the FILESTREAM feature at the service level. Open SQL Server Configuration Manager, right-click the SQL Server service, choose Properties, and enable it on the FILESTREAM tab.

Restart the SQL Server service.

Re-run the table creation query. This time you will see a different error;

It means that we have not created a FILESTREAM filegroup. Remember, FILESTREAM requires its own filegroup. Right-click on your database;

After creating the filegroup, open the database properties again and click on Files. Click Add to add a new file and associate it with the previously created filegroup.
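The same steps can be scripted in T-SQL (a sketch; the filegroup name and file path are assumptions, adjust them for your environment);

ALTER DATABASE TODO ADD FILEGROUP FileStreamGroup CONTAINS FILESTREAM;
ALTER DATABASE TODO ADD FILE
    (NAME = N'FileStreamData', FILENAME = N'D:\FileStreamData')
    TO FILEGROUP FileStreamGroup;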

Navigate to your filesystem path and you should see this;

As shown in the above image, the $FSLOG directory and the filestream.hdr file have been created. $FSLOG is like the SQL Server transaction log, and filestream.hdr contains FILESTREAM metadata. Make sure that you do not change or edit those files.

Re-run our table creation SQL script. This time the table should be created successfully.

It's time to add a new Excel document to this table. Run the following SQL script;

--insert a new excel document
INSERT [dbo].[Document] ([Extension], [FileName], [FileSize], [Doc_Content])
SELECT 'xlsx', 'SimpleExcelDoc.xlsx', 10, [Doc_Data].*
FROM OPENROWSET
    (BULK 'D:\SimpleExcelDoc.xlsx', SINGLE_BLOB) [Doc_Data]

Run this statement;

SELECT * FROM [dbo].[Document]

Here is the file on file system;

This file can be opened directly with a compatible application, in this example Excel: right-click, choose Open with, and select Excel. When we insert the document using the FILESTREAM feature, SQL Server copies the file into the FILESTREAM path. It does not change the file properties.

To access FILESTREAM data using the managed API, we need to figure out the FILESTREAM content path. Here is the script;

--access Filestream data using managed API
SELECT Doc_Content.PathName() AS FilePath, * 
FROM [dbo].[Document]

You should see these results;

To delete a document from Filestream container, run this;

--delete files from Filestream container.
USE TODO
GO
DELETE FROM dbo.Document
WHERE Doc_Num = '658020B3-1690-EB11-90BB-00155D717606'

In this case, we can see that the file still exists in the FILESTREAM container. SQL Server removes old files using a garbage collection process, which removes a file once it is no longer required by SQL Server. Remember the $FSLOG folder in the FILESTREAM path? It works similarly to a transaction log in a SQL Server database.

SQL Server has an internal table, filestream_tombstone, which contains an entry for this old file.

SELECT * FROM sys.internal_tables WHERE name LIKE 'filestream_tomb%'

The garbage collection process removes the old file only if there is no entry for the file in the filestream_tombstone table. We cannot see the content of this filestream_tombstone table through a normal connection. However, if you wish to do so, use the DAC (dedicated administrator connection).

The database recovery model plays an essential role in when the entry in the filestream_tombstone table (and the underlying file) is removed.

  • Simple recovery model: the file is removed on the next checkpoint
  • Full recovery model: we need to perform a transaction log backup before the file is removed automatically

We can run the garbage collection process manually to remove the files from the container before the automatic garbage collection cleans them up. We can do it using the 'sp_filestream_force_garbage_collection' stored procedure.
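A minimal example of invoking it against the database used in this post;

EXEC sp_filestream_force_garbage_collection @dbname = N'TODO';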

Resources

https://docs.microsoft.com/en-us/sql/relational-databases/blob/filestream-sql-server?view=sql-server-ver15

https://www.pmichaels.net/tag/default-filestream-filegroup-is-not-available-in-database/

https://www.sqlshack.com/managing-data-with-sql-server-filestream-tables/

https://www.jitendrazaa.com/blog/sql/sqlserver/export-documents-saved-as-blob-binary-from-sql-server/