Thursday, 27 February 2014

How to use the Import and Export Column Transformation in SSIS

How to use the Import and Export Column transformations in SSIS:
Today I am going to demonstrate where and how to use the Import Column and Export Column transformations in SSIS.
Before starting the demo, I first want to explain what these transformations are for.
The Import Column and Export Column transformations are mainly used to load file data, such as images and documents (DOC, PDF, etc.), into a database, and then to move that same data back out to a folder or some other location in the file system.
The Import Column transformation stores the file contents as raw data in the table, and the Export Column transformation writes that data back out to actual files.
I hope this gives you some idea of the Import and Export Column transformations.
Let's begin the demo on the Export and Import Column transformations. Follow the steps below.
Step 1:
Create one table with the below schema:
CREATE TABLE [dbo].[Export](
      [ID] [int] IDENTITY(1,1) NOT NULL,
      [Location] [varchar](150) NULL,
      [FileContent] [varchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

This table is used to store the file location and the file content loaded through the Import Column transformation. (Note: FileContent is VARCHAR(MAX) here, which pairs with the DT_TEXT output type of the Import Column; for binary files such as images, VARBINARY(MAX) with DT_IMAGE would be the usual choice.)
Step 2:
After creating the table, open BIDS, create an Integration Services project, and open a new package.
Step 3:
In this package, drag and drop an Execute SQL Task from the Control Flow items.
This task truncates the table before each load, so the table can be reloaded with fresh data.


Then configure the Execute SQL Task as below:

Connection: the connection to your local database that contains the Export table.
SQLSourceType: Direct input.
SQLStatement: TRUNCATE TABLE dbo.Export
After setting these three properties, click OK.

Step 4:
Now drag a Foreach Loop Container into the package.
This container is used to get each file's name and location.

Configure the Foreach Loop Container as below.


After opening the Foreach Loop Container, go to the Collection tab and fill in all the fields as shown above.
In the Folder field, give the path of the folder whose files are going to be stored in the database table.
In Files, keep *.*, which means it will pick up files of any type.
In Retrieve file name, select Fully qualified.

The image above shows the path and the files that are going to be stored in the database table using the Import Column transformation.
Now go to the Variable Mappings tab and create one user-defined variable at package level; this variable will hold the fully qualified file name on each iteration.

Keep the Index at 0, then click OK.

Step 5:
Now drag another Execute SQL Task into the Foreach Loop Container to store the variable value in the table.


Set all the properties the same way as in the first Execute SQL Task; the only change is the SQL statement.
SQL Statement: INSERT INTO dbo.Export(Location) VALUES(?)
After this, go to the Parameter Mapping tab and choose the variable that was created for the Foreach Loop Container.


Configure this tab as shown in the image above, then click OK.

Step 6:
Next, drag a Data Flow Task into the package.
In this data flow, drag an OLE DB Source and select the Export table.

After selecting the table, click OK.
Step 7:
Now drag the Import Column transformation into the Data Flow Task.


After this, configure the Import Column transformation:
Go to the Input Columns tab and select Location, then click OK.


Step 8:
Now drag an OLE DB Command component into the data flow.

On the Connection Managers tab, set the connection to the database.

On the Component Properties tab, set:
SQL Command: UPDATE dbo.Export SET FileContent = ? WHERE Id = ?

Now go to the next tab, Column Mappings.

Here you will find one new input column, the file content produced by the Import Column transformation; map Id to Param_1 and the file-content column to Param_0, matching the two ? placeholders in the UPDATE.
Now click OK.
Up to this point, the file data has been imported into the table.
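To verify at the database level that the import worked, here is a quick sketch of a check (DATALENGTH shows how many bytes of file content were stored):

SELECT ID, Location, DATALENGTH(FileContent) AS ContentBytes
FROM dbo.Export

Rows with a non-NULL ContentBytes now hold the contents of the corresponding file.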

Step 9:
Now the data is ready at the database level; next we need to write this data back out to a folder, and for that we use the Export Column transformation.
Here also drag an OLE DB Source into the data flow and point it at the Export table.
Next, add a Derived Column transformation to the data flow to change the file path, so that the data is written to the destination folder.


Here, use REPLACE to change the source path to the destination path, then click OK.
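For example, a minimal Derived Column expression, assuming (hypothetically) that the files were imported from C:\Source and should be written to C:\Dest:

REPLACE(Location, "C:\\Source", "C:\\Dest")

In SSIS expressions, string literals use double quotes and backslashes must be escaped with \\.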

Step 10:
Now drag the Export Column transformation into the data flow and set its properties:
Extract Column: the FileContent column.
File Path Column: the destination path column created in the Derived Column transformation.
Check Force Truncate so that existing files are overwritten.
Then click OK and run the package.
Before running the package:


After running the package:


This is all about the Import and Export Column transformations in SSIS.

If you are unable to see the images, let me know and I will share the file.

Thursday, 12 September 2013

How to Find the Length of Data in TEXT and NTEXT Data Types :P


Hi All,

We have the string function LEN for finding the length of a string with the VARCHAR data type.

We can't use the LEN function on the TEXT and NTEXT data types.

For these we have another function, DATALENGTH. We can also use LEN, but then you need to CAST the TEXT column first. Note that DATALENGTH returns the length in bytes, not characters, so for NTEXT it is twice the character count. See the example below.

EX:
DATALENGTH(text_column)
LEN(CAST(text_column AS VARCHAR(MAX)))
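A runnable sketch using a hypothetical temp table, showing the byte-versus-character difference:

CREATE TABLE #Notes (Body TEXT, NBody NTEXT)
INSERT INTO #Notes VALUES ('Hello', N'Hello')

SELECT DATALENGTH(Body)  AS BodyBytes,                 -- 5 (bytes)
       DATALENGTH(NBody) AS NBodyBytes,                -- 10 (2 bytes per character)
       LEN(CAST(Body  AS VARCHAR(MAX)))  AS BodyChars, -- 5 (characters)
       LEN(CAST(NBody AS NVARCHAR(MAX))) AS NBodyChars -- 5 (characters)
FROM #Notes

DROP TABLE #Notes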

 Thanks For Visiting Blog.

Tuesday, 10 September 2013

Generating Multiplication Matrix Dynamically In SQL

Hi All,

After a long time, I have come up with some new stuff in T-SQL,

i.e. generating a multiplication matrix dynamically by passing the matrix size as input. See the images below so you can understand easily :).




Script for the above OUTPUT (here I am passing 12 as my input):

DECLARE @k INT = 12   -- matrix size (the input)
IF (@k > 0)
BEGIN
    DECLARE @FirstPart VARCHAR(MAX), @SecondPart VARCHAR(MAX)

    -- @FirstPart declares the loop variables and a table variable
    -- with one INT column per multiplier
    SET @FirstPart  = 'DECLARE @i INT = 1, @j INT = 1, @k INT = ' + CAST(@k AS VARCHAR)
                    + ' DECLARE @table TABLE ([ ] INT'
    -- @SecondPart builds the SELECT list that produces one row of the matrix
    SET @SecondPart = 'SELECT @i AS [ ], @i*@j AS [1]'

    DECLARE @l INT = 1
    WHILE (@l <= @k)
    BEGIN
        SET @FirstPart = @FirstPart + ', [' + CAST(@l AS VARCHAR) + '] INT'

        IF (@l < @k)
            SET @SecondPart = @SecondPart + ', @i*(@j+' + CAST(@l AS VARCHAR)
                            + ') AS [' + CAST(@l + 1 AS VARCHAR) + ']'

        SET @l = @l + 1
    END

    SET @FirstPart = @FirstPart + ')'

    -- @ThirdPart loops @i from 1 to @k, inserting one matrix row per iteration
    DECLARE @ThirdPart VARCHAR(MAX) =
    ' WHILE (@i <= @k)
      BEGIN
          INSERT INTO @table ' + @SecondPart + '
          SET @i = @i + 1
      END
      SELECT * FROM @table'

    EXEC (@FirstPart + @ThirdPart)
END
ELSE
    SELECT 'Sorry, we can''t process this request' AS Alert
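To see what the dynamic SQL actually executes: for a small input such as @k = 2, the generated statement looks roughly like this (whitespace aside):

DECLARE @i INT = 1, @j INT = 1, @k INT = 2
DECLARE @table TABLE ([ ] INT, [1] INT, [2] INT)
WHILE (@i <= @k)
BEGIN
    INSERT INTO @table SELECT @i AS [ ], @i*@j AS [1], @i*(@j+1) AS [2]
    SET @i = @i + 1
END
SELECT * FROM @table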



OUTPUT:




 Thanks For Visiting Blog.

Wednesday, 10 July 2013

Deleting Duplicate Records from a Table in a Different Way


Hi All,

Here I am going to explain how to DELETE duplicate records from a table using a CTE and ROW_NUMBER().

Table and Data Script:
CREATE TABLE [dbo].[Deleting Duplicate Record](
[Id] [int] NULL,
[Name] [varchar](50) NULL,
[Department] [varchar](50) NULL,
[CreatedDate] [datetime] NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (1, N'Santosh', N'BI', CAST(0x0000A1F701592FF6 AS DateTime))
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (1, N'Santosh', N'BI', CAST(0x0000A1F70159578E AS DateTime))
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (2, N'Santosh', N'MSSQL', CAST(0x0000A1F701597B6E AS DateTime))
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (2, N'Santosh', N'MSSQL', CAST(0x0000A1F7015987FC AS DateTime))
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (3, N'Krishna', N'JAVA', CAST(0x0000A1F70159BC02 AS DateTime))
INSERT [dbo].[Deleting Duplicate Record] ([Id], [Name], [Department], [CreatedDate]) VALUES (4, N'Krishna', N'SPRING', CAST(0x0000A1F70159D1F6 AS DateTime))
GO


Solution 1: Here I am using the CreatedDate column in the ORDER BY to assign row numbers; by default the order is ASC.

;WITH Duplicate AS
(SELECT *, ROW_NUMBER() OVER(PARTITION BY Id, Name, Department ORDER BY CreatedDate) AS Numbering FROM [Deleting Duplicate Record])

DELETE FROM Duplicate WHERE Numbering > 1
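If you first want to preview which rows would be removed, a quick sketch: run the same CTE with a SELECT instead of the DELETE:

;WITH Duplicate AS
(SELECT *, ROW_NUMBER() OVER(PARTITION BY Id, Name, Department ORDER BY CreatedDate) AS Numbering FROM [Deleting Duplicate Record])

SELECT * FROM Duplicate WHERE Numbering > 1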

Here I am deleting records based on CreatedDate, meaning that for each set of duplicates the rows inserted later are the ones that get deleted.

This is common, and everyone can DELETE rows based on the created date, but I want to DELETE rows WITHOUT using the CreatedDate column.

Solution 2: Here I am deleting the duplicate rows WITHOUT using the CreatedDate column, still deleting in inserted order.

;WITH Duplicate AS
(SELECT *, ROW_NUMBER() OVER(PARTITION BY Id, Name, Department ORDER BY (SELECT 0)) AS Numbering FROM [Deleting Duplicate Record])

DELETE FROM Duplicate WHERE Numbering > 1

Here I am just adding ORDER BY (SELECT 0) so that ROW_NUMBER() is assigned without referencing any real column; in practice the rows usually come back in inserted order, though SQL Server does not strictly guarantee this.

This is a slightly different way to delete duplicate records from a table.


Thanks For Visiting Blog.



Friday, 5 July 2013

Without Using UNION We Can Get the Sum of Rows at Group Level and of All Records at the Bottom



Hi All,


Most of the time we use UNION to get group-level sums alongside the detail rows.

But in that case we hit the database twice, and we still don't get the sum of all records; for that we would need to add one more UNION.

So to overcome this situation I used ROLLUP.

SCRIPT:

CREATE TABLE [dbo].[SUMING_Data](
[ID] [int] NULL,
[Amount] [int] NULL,
[NAME] [varchar](5) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (1, 100, N'x')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (1, 200, N'y')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (1, 300, N'z')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (2, 200, N'p')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (4, 200, N'q')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (2, 400, N'r')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (4, 600, N's')
INSERT [dbo].[SUMING_Data] ([ID], [Amount], [NAME]) VALUES (3, 100, N't')

GO

SELECT * FROM [SUMING_Data]


In the screenshot above, each color represents a different group.

But I want output with each group-level sum, plus the sum of all rows at the bottom.

For that, I previously used UNION.

Using UNION, we need to write two queries:


SELECT * FROM SUMING_Data
UNION
SELECT [ID], SUM([Amount]), 'Total' FROM SUMING_Data GROUP BY [ID]




Problem:

In the screenshot above, the red line shows the individual amounts and the blue one shows the group totals.
But if I want the total of all records, I need to add one more UNION and one more hit to the database.

To overcome this problem, use the query below.

Solution:

Without using UNION we can also do this, and with ROLLUP the query is very simple.

Method 1:

SELECT ID, SUM(Amount) AS Amount, ISNULL(NAME, 'TOTAL') AS GroupLevel
FROM SUMING_Data
GROUP BY ID, NAME
WITH ROLLUP

Using ISNULL, we get the output below:


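In case the screenshot is not visible, for the sample data above this returns the following (row order may vary without an ORDER BY; 2200 is the grand total):

ID    Amount  GroupLevel
1     100     x
1     200     y
1     300     z
1     600     TOTAL
2     200     p
2     400     r
2     600     TOTAL
3     100     t
3     100     TOTAL
4     200     q
4     600     s
4     800     TOTAL
NULL  2200    TOTAL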

Method 2:

SELECT ID, SUM(Amount) AS Amount, GROUPING(NAME) AS GroupLevel
FROM SUMING_Data
GROUP BY ID, NAME
WITH ROLLUP

Using GROUPING, we get the output below; here 0 indicates an individual amount and 1 indicates a rolled-up (subtotal) amount, and at the bottom of both outputs you can find the total amount.



This is all about ROLLUP in SQL.

Thanks For Visiting Blog.


Friday, 28 June 2013

How Simply We Can Send Mail Using SQL Server



Hi All,

We can send email through SQL Server as well; to do so, follow these simple steps.

Step 1:
          Open SQL Server Management Studio, connect to the server, and go to the Management node. Right-click Database Mail, click Configure Database Mail, select Set up Database Mail, and click Next.



Step 2:
           Now give a profile name for the email and create a new SMTP account; for that, click Add, then New Account, and fill in the fields as shown in the image below.


Step 3:
           Now manage the profile security and select your profile, then click Next, Next, and Finish.


Step 4:
           Now test the mail: right-click Database Mail again and select Send Test E-Mail, and you will receive an email.
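Once the profile is set up, you can also send mail from T-SQL using the built-in msdb.dbo.sp_send_dbmail procedure; a minimal sketch, assuming a hypothetical profile named 'MyMailProfile':

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyMailProfile',        -- the profile created in Step 2 (name assumed)
    @recipients   = 'someone@example.com',  -- hypothetical recipient
    @subject      = 'Test mail from SQL Server',
    @body         = 'Database Mail is working.'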


Thanks For Visiting Blog.

Thursday, 27 June 2013

HOW TO USE CACHE TRANSFORM


            
           HOW EFFECTIVELY WE CAN USE CACHE TRANSFORM IN SSIS


Scenario:
  • When you use the Lookup transformation, you point it at a lookup table in the database to check the source data against.
  • If you are checking a single source table there is no problem, because the database is hit only once to fetch the lookup table.
  • But in a real-world scenario you may have more than one source table looking up against the same lookup table; in this situation the database hits increase (the lookup table is fetched once per lookup) and performance automatically decreases.

To overcome this situation I used the Cache Transform in the SSIS data flow.

Explanation:

Step 1:
           Drag and drop a Data Flow Task into the control flow, open the data flow, add an OLE DB Source, make the database connection, and choose the lookup table from that database.

Step 2:
           Now add a Cache Transform to the data flow, connect the OLE DB Source to it, and configure the Cache Connection Manager as below.


        Double-click the Cache Transform and check Use file cache, then select the .caw (cache) file. (To get a .caw file, open Notepad and save an empty file with the .caw extension.)


      After that, go to the Columns tab in the Cache Connection Manager and select one index column.


Step 3:
           Now add another Data Flow Task to the control flow, open it, and add an OLE DB Source for the source table. Add a Lookup transformation to the data flow and connect the source to it; then open the Lookup, select Full cache as the cache mode, and choose Cache connection manager as the connection type.



Step 4:
           Now go to the Connection tab in the Lookup and select the cache connection manager (the connection already created to the .caw file).

     
     Then go to the Columns tab and map the columns you are going to look up on.

Then click on OK.
Now you can redirect the rows (matched and not matched) to the desired destinations.


Finally:

             You can reuse the same cache file for as many source-table lookups as you want, so the database is hit only a single time.


In the screenshot above you can find three different sources sharing one lookup cache file; this reduces database hits and increases performance.

THANKS FOR VISITING BLOG