Showing posts with label rows. Show all posts

Thursday, March 29, 2012

'Default' NON_EMPTY_BEHAVIOR behavior...

Hi,

The query below returns 20 rows when run against our production DW. When the NON_EMPTY_BEHAVIOR is removed, it returns 10 rows. Note the lack of a member list on the NON_EMPTY_BEHAVIOR. Has anyone else seen this, or have any ideas as to why it's causing me a 'problem'?

WITH MEMBER
    [Container].[Container Type].[AllCalc] AS [Container].[Container Type].[All],
    NON_EMPTY_BEHAVIOR = {}
SELECT
    ([Measures].[Container Load Count]) ON COLUMNS,
    NON EMPTY
    [Trade].[Trade].&[EAST_AFRICA] * [Container].[Container Number].[All].Children ON ROWS
FROM [DW_SM]
WHERE (
    <various slicers>,
    [Container].[Container Type].[AllCalc]
)

Thanks,

Will.

You should remove NON_EMPTY_BEHAVIOR from your calculated member, because it is set incorrectly. Specifying an empty set for NON_EMPTY_BEHAVIOR is, for all practical purposes, always wrong. And to get any real benefit from it in the form you are trying to use, it should only be set on a calculated measure, not on a calculated member of another dimension.
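As a sketch of the intended usage (the calculated measure here is hypothetical, not from the query above), NON_EMPTY_BEHAVIOR belongs on a calculated measure and names the base measure(s) whose emptiness the calculation mirrors:

```mdx
WITH MEMBER [Measures].[Load Count Share] AS
    [Measures].[Container Load Count]
    / ([Measures].[Container Load Count], [Container].[Container Type].[All]),
    NON_EMPTY_BEHAVIOR = { [Measures].[Container Load Count] }
SELECT [Measures].[Load Count Share] ON COLUMNS
FROM [DW_SM]
```

The hint tells the engine the calculation is empty exactly when [Container Load Count] is empty, so NON EMPTY filtering can skip evaluating it; an empty set as the hint gives the engine nothing to test against, which is why it misbehaves.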

Wednesday, March 21, 2012

Deep copy of child rows + referential integrity?

I have a database of basically this structure:

tblG
- GKey (PK)

tblA
- AKey (PK)
- GKey (FK -> tblG)
- data

tblB
- BKey (PK)
- GKey (FK -> tblG)
- data

tblAB
- AKey (FK -> tblA)
- BKey (FK -> tblB)
- data

I'm trying to write a procedure that will take a tblG.GKey and
clone all of its child rows in tblA and tblB (with new PKs),
and then clone all of the corresponding rows in tblAB (using the
new PKs for tblA and tblB).

Does anybody have any suggestions on how this might be done sanely?

Thanks!|||What are the primary keys? Are they identity columns? How are they generated? Also, what version of SQL Server are you using?|||I'm on SQL Server 2005.

The primary keys (marked PK) are identity columns and are generated automatically.
Using "select IDENT_CURRENT('tblA')" works as I would expect it to.

I feel like I need to traverse all the rows in tblA and tblB that need to be copied, insert them, then grab the IDENT_CURRENT value for each one and insert it into a temporary table along with the original key. Then go through all the rows in tblAB and replace the FKs with look-ups from the temporary table. I just have no idea how I could implement that.|||

In my mind it's still not totally clear what you're trying to do.

Is there a chance you could post a short example (around 4 rows from each table) of the data you'd expect to see in your tables before and after the changes have been made?

Thanks
Chris

|||Sure. Here are the tables before the copy:

"tblG"
-
GKey |
-
1

"tblA"
-
AKey | GKey | data
1 | 1 | abc
2 | 1 | def
3 | 1 | ghi

"tblB"

-

BKey | GKey | data

1 | 1 | jkl

2 | 1 | mno

3 | 1 | oqr

"tblAB"

-

AKey | BKey | data

1 | 3 | tuv

1 | 2 | xyz

3 | 1 | aaa
2 | 1 | bbb

And now after the copy:

"tblG"

-

GKey |

-

1

2

"tblA"

-

AKey | GKey | data

1 | 1 | abc

2 | 1 | def

3 | 1 | ghi

4 | 2 | abc

5 | 2 | def

6 | 2 | ghi

"tblB"

-

BKey | GKey | data

1 | 1 | jkl

2 | 1 | mno

3 | 1 | oqr

4 | 2 | jkl

5 | 2 | mno

6 | 2 | oqr

"tblAB"

-

AKey | BKey | data

1 | 3 | tuv

1 | 2 | xyz

3 | 1 | aaa

2 | 1 | bbb

4 | 6 | tuv

4 | 5 | xyz

6 | 4 | aaa

5 | 4 | bbb

Does that make sense?|||

Although I generally favour set-based approaches, I can't think of a set-based method that wouldn't require a modification to your existing tables. The cursor-based example below returns the results you stipulated in your previous post.

Essentially, one row at a time is inserted into tblA and the old and new IDENTITY values stored away in a table variable. The same is done for tblB. It's then a simple matter of joining the two table variables onto tblAB to copy the relationships that exist in tblAB.

If you desperately need a set-based method, and can make changes to your tables, then you could add an extra column to both tblA and tblB to store the ID of the row from which the current row was copied. As long as this column is maintained during subsequent INSERTs, you can simply modify and use the final INSERT statement of the code I've included below to populate tblAB.
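For illustration, that set-based variant might look like the sketch below. The OriginalAKey/OriginalBKey columns are hypothetical additions, not part of the schema posted above:

```sql
-- Assumes tblA and tblB each gained a nullable column recording the source row:
--   ALTER TABLE dbo.tblA ADD OriginalAKey INT NULL;
--   ALTER TABLE dbo.tblB ADD OriginalBKey INT NULL;
DECLARE @OldGKey INT, @NewGKey INT;
SELECT @OldGKey = MAX(GKey), @NewGKey = MAX(GKey) + 1 FROM dbo.tblG;

INSERT INTO dbo.tblG VALUES (@NewGKey);

-- Copy the child rows in one statement each, remembering the source key
INSERT INTO dbo.tblA (GKey, Data, OriginalAKey)
SELECT @NewGKey, Data, AKey FROM dbo.tblA WHERE GKey = @OldGKey;

INSERT INTO dbo.tblB (GKey, Data, OriginalBKey)
SELECT @NewGKey, Data, BKey FROM dbo.tblB WHERE GKey = @OldGKey;

-- Rebuild the join rows by mapping each old key to its new copy
INSERT INTO dbo.tblAB (AKey, BKey, Data)
SELECT na.AKey, nb.BKey, ab.Data
FROM dbo.tblAB ab
INNER JOIN dbo.tblA na ON na.OriginalAKey = ab.AKey AND na.GKey = @NewGKey
INNER JOIN dbo.tblB nb ON nb.OriginalBKey = ab.BKey AND nb.GKey = @NewGKey;
```

This avoids the cursors entirely, at the cost of carrying the extra lineage columns in the base tables.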

Chris

/*
CREATE TABLE dbo.tblG
(
    GKey INT
)

CREATE TABLE dbo.tblA
(
    AKey INT IDENTITY(1, 1) PRIMARY KEY,
    GKey INT,
    Data VARCHAR(3)
)

CREATE TABLE dbo.tblB
(
    BKey INT IDENTITY(1, 1) PRIMARY KEY,
    GKey INT,
    Data VARCHAR(3)
)

CREATE TABLE dbo.tblAB
(
    AKey INT,
    BKey INT,
    Data VARCHAR(3)
)

INSERT INTO dbo.tblG
VALUES(1)

SET IDENTITY_INSERT dbo.tblA ON
INSERT INTO dbo.tblA(AKey, GKey, Data)
SELECT 1, 1, 'abc' UNION
SELECT 2, 1, 'def' UNION
SELECT 3, 1, 'ghi'
SET IDENTITY_INSERT dbo.tblA OFF

SET IDENTITY_INSERT dbo.tblB ON
INSERT INTO dbo.tblB(BKey, GKey, Data)
SELECT 1, 1, 'jkl' UNION
SELECT 2, 1, 'mno' UNION
SELECT 3, 1, 'pqr'
SET IDENTITY_INSERT dbo.tblB OFF

INSERT INTO dbo.tblAB(AKey, BKey, Data)
SELECT 1, 3, 'tuv' UNION
SELECT 1, 2, 'xyz' UNION
SELECT 3, 1, 'aaa' UNION
SELECT 2, 1, 'bbb'
*/

DECLARE @OldGKey INT
DECLARE @NewGKey INT
DECLARE @OldAKey INT
DECLARE @NewAKey INT
DECLARE @OldBKey INT
DECLARE @NewBKey INT
DECLARE @AKeyNewOld TABLE (OldAKey INT, NewAKey INT)
DECLARE @BKeyNewOld TABLE (OldBKey INT, NewBKey INT)
DECLARE @Data VARCHAR(3)

SELECT @OldGKey = MAX(GKey), @NewGKey = MAX(GKey + 1)
FROM dbo.tblG

INSERT INTO dbo.tblG
VALUES(@NewGKey)

-- Copy tblA rows one at a time, recording old and new IDENTITY values
DECLARE curAKeys CURSOR FAST_FORWARD LOCAL FOR
SELECT AKey, Data
FROM dbo.tblA
WHERE GKey = @OldGKey
ORDER BY AKey

OPEN curAKeys
FETCH NEXT FROM curAKeys INTO @OldAKey, @Data
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO tblA(GKey, Data)
    VALUES(@NewGKey, @Data)

    SET @NewAKey = SCOPE_IDENTITY()

    INSERT INTO @AKeyNewOld(OldAKey, NewAKey)
    VALUES(@OldAKey, @NewAKey)

    FETCH NEXT FROM curAKeys INTO @OldAKey, @Data
END
CLOSE curAKeys
DEALLOCATE curAKeys

-- Copy tblB rows the same way
DECLARE curBKeys CURSOR FAST_FORWARD LOCAL FOR
SELECT BKey, Data
FROM dbo.tblB
WHERE GKey = @OldGKey
ORDER BY BKey

OPEN curBKeys
FETCH NEXT FROM curBKeys INTO @OldBKey, @Data
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO tblB(GKey, Data)
    VALUES(@NewGKey, @Data)

    SET @NewBKey = SCOPE_IDENTITY()

    INSERT INTO @BKeyNewOld(OldBKey, NewBKey)
    VALUES(@OldBKey, @NewBKey)

    FETCH NEXT FROM curBKeys INTO @OldBKey, @Data
END
CLOSE curBKeys
DEALLOCATE curBKeys

-- Rebuild the join rows using the old-to-new key mappings
INSERT INTO dbo.tblAB(AKey, BKey, Data)
SELECT akno.NewAKey,
       bkno.NewBKey,
       tab.Data
FROM dbo.tblAB tab
INNER JOIN @AKeyNewOld akno ON akno.OldAKey = tab.AKey
INNER JOIN @BKeyNewOld bkno ON bkno.OldBKey = tab.BKey

SELECT * FROM tblG
SELECT * FROM tblA
SELECT * FROM tblB
SELECT * FROM tblAB

|||Works like a charm. Thanks a million

Monday, March 19, 2012

DecryptByCert performance

I have a bunch of encrypted rows in a table and a stored procedure to select those rows. It looks like this:

SELECT CAST(DecryptByCert(Cert_ID('CertId'), field1) AS VARCHAR) AS f1,
       CAST(DecryptByCert(Cert_ID('CertId'), field2) AS VARCHAR) AS f2,
       CAST(DecryptByCert(Cert_ID('CertId'), field3) AS VARCHAR(255)) AS f3
FROM [table]

This stored procedure takes a really long time even with only hundreds of rows, so I suspect I am doing something wrong. Is there any way to optimize it?

Encryption/decryption with an asymmetric key (i.e. a certificate) is much slower than with a symmetric key. The recommended approach is to encrypt the data with a symmetric key (or set of keys if you prefer), such as an AES or 3DES key, and protect that key using the certificate.

Another limitation you should consider is that asymmetric-key encryption in SQL Server 2005 is limited to a single block of data. This means that the maximum amount of plaintext you can encrypt is bounded by the modulus of the private key. In the case of RSA 1024 (which applies if you are using SQL Server 2005-generated certificates), the maximum plaintext is 117 bytes.
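As a sketch of that recommended pattern (the key name DataKey and the table/column names are hypothetical; the certificate CertId is assumed to already exist), the data is encrypted with a symmetric key that is itself protected by the certificate:

```sql
-- Create a symmetric key protected by the existing certificate
CREATE SYMMETRIC KEY DataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CertId;

OPEN SYMMETRIC KEY DataKey DECRYPTION BY CERTIFICATE CertId;

-- Encrypt on the way in...
INSERT INTO [table] (field1)
VALUES (EncryptByKey(Key_GUID('DataKey'), 'some plaintext'));

-- ...and decrypt on the way out. DecryptByKey locates the key via the
-- GUID embedded in the ciphertext, so no key name is passed here.
SELECT CAST(DecryptByKey(field1) AS VARCHAR(255)) AS f1
FROM [table];

CLOSE SYMMETRIC KEY DataKey;
```

The certificate is only used once per session to open the symmetric key, so the slow asymmetric operation no longer runs once per row and column.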

I am also including an additional thread from this forum that talks about this topic:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=114314&SiteID=1

Thanks a lot,

-Raul Garcia

SDE/T

SQL Server Engine

Friday, February 17, 2012

Debug stored procedure that uses comma delimited list to insert multiple records

I need some help with a stored procedure to insert multiple rows into a join table from a CheckBoxList on a form. The database structure has 3 tables - Products, Files, and ProductFiles (join). From an ASP.NET FormView, users are able to upload files to the server. The FormView has a Products CheckBoxList where the user selects all products that the file they are uploading applies to. I parse the selected values of the CheckBoxList into a comma-delimited list, which is then passed with other parameters to the stored proc. If only one value is selected in the CheckBoxList, the sproc executes correctly. Also, if I run SQL Profiler I can confirm that ASP.NET is passing the correct information to the sproc:

exec proc_Add_Product_Files @FileName = N'This is just a test.doc', @FileDescription = N'test', @FileSize = 24064, @LanguageID = NULL, @DocumentCategoryID = 1, @ComplianceID = NULL, @SubmittedBy = N'Kevin McPhail', @SubmittedDate = 'Jan 18 2006 12:00:00:000AM', @ProductID = N'10,11,8'

Here is the stored proc. It is based on an article posted in another newsgroup on handling lists in a stored proc. Obviously there was something in the article I did not understand correctly, or the author left something out that most people probably already know (I am fairly new to stored procs).

CREATE PROCEDURE proc_Add_Product_Files_v2
/*
Declare variables for the stored procedure. ProductID is a varchar because it will
receive a comma-delimited list of values from the webform and then insert a row
into ProductFiles for each product that the file being uploaded pertains to.
*/
@FileName varchar(150),
@FileDescription varchar(150),
@FileSize int,
@LanguageID int,
@DocumentCategoryID int,
@ComplianceID int,
@SubmittedBy varchar(50),
@SubmittedDate datetime,
@ProductID varchar(150)
AS
BEGIN

DECLARE @FileID INT

SET NOCOUNT ON

/*
Insert into the Files table and retrieve the primary key of the new record using @@IDENTITY
*/
INSERT INTO Files (FileName, FileDescription, FileSize, LanguageID, DocumentCategoryID, ComplianceID, SubmittedBy, SubmittedDate)
VALUES (@FileName, @FileDescription, @FileSize, @LanguageID, @DocumentCategoryID, @ComplianceID, @SubmittedBy, @SubmittedDate)

SELECT @FileID = @@IDENTITY

/*
Uses dynamic SQL to insert the comma-delimited list of product IDs into the ProductFiles table.
*/
DECLARE @ProductFilesInsert varchar(2000)

SET @ProductFilesInsert = 'INSERT INTO ProductFiles (FileID, ProductID) SELECT ' + CONVERT(varchar, @FileID) + ', Product1ID FROM Products WHERE Product1ID IN (' + @ProductID + ')'

EXEC(@ProductFilesInsert)

END
GO

I created your stored procedure locally and did a PRINT of @ProductFilesInsert, and all looks good to me. Setting @FileID = 0 instead of selecting its value from @@IDENTITY, this is what @ProductFilesInsert contains, and it is syntactically correct:

INSERT INTO ProductFiles (FileID, ProductID) SELECT 0, Product1ID FROM Products WHERE Product1ID IN (10,11,8)

Your stored procedure is named proc_Add_Product_Files_v2, yet you are executing proc_Add_Product_Files. Is the problem simply that you are executing an old version of your stored procedure?|||

Terri:

Thanks! Sometimes it is so obvious. I am a little embarrassed that I did not catch that. :)

Thanks again,

Kevin

|||

Kevin.McPhail wrote:

Thanks! Sometimes it is so obvious. I am a little embarrassed that I did not catch that. :)

It wasn't obvious to me. The only reason I noticed was that exec proc_Add_Product_Files failed for me because I didn't have the original in place :-) I can't tell you how many times I've been burned by the very same thing.

For what it's worth, I am not a big fan of dynamic SQL, especially when an alternate methodology is possible. You could use this approach instead:

INSERT INTO ProductFiles
(
    FileID,
    ProductID
)
SELECT
    @FileID,
    Product1ID
FROM Products
INNER JOIN dbo.Split(@ProductID, ',') AS A ON Products.Product1ID = A.Element

There are many variations of a "split" function. Here's one that Dinakar provided in this thread: http://forums.asp.net/989365/ShowPost.aspx:

CREATE FUNCTION [dbo].[Split] ( @vcDelimitedString nVarChar(4000),
                                @vcDelimiter nVarChar(100) )
/**************************************************************************
DESCRIPTION: Accepts a delimited string and splits it at the specified
    delimiter points. Returns the individual items as a table data
    type with the ElementID field as the array index and the Element
    field as the data
PARAMETERS:
    @vcDelimitedString - The string to be split
    @vcDelimiter - String containing the delimiter where the
        delimited string should be split
RETURNS:
    Table data type containing array of strings that were split with
    the delimiters removed from the source string
USAGE:
    SELECT ElementID, Element FROM Split('11111,22222,3333', ',') ORDER BY ElementID
AUTHOR: Karen Gayda
DATE: 05/31/2001
MODIFICATION HISTORY:
    WHO    DATE    DESCRIPTION
    ---    ----    -----------
***************************************************************************/
RETURNS @tblArray TABLE
(
    ElementID smallint IDENTITY(1,1) NOT NULL PRIMARY KEY, -- Array index
    Element nVarChar(1200) NULL                            -- Array element contents
)
AS
BEGIN
    DECLARE
        @siIndex smallint,
        @siStart smallint,
        @siDelSize smallint

    SET @siDelSize = LEN(@vcDelimiter)

    -- Loop through source string and add elements to destination table array
    WHILE LEN(@vcDelimitedString) > 0
    BEGIN
        SET @siIndex = CHARINDEX(@vcDelimiter, @vcDelimitedString)
        IF @siIndex = 0
        BEGIN
            INSERT INTO @tblArray (Element) VALUES(@vcDelimitedString)
            BREAK
        END
        ELSE
        BEGIN
            INSERT INTO @tblArray (Element) VALUES(SUBSTRING(@vcDelimitedString, 1, @siIndex - 1))
            SET @siStart = @siIndex + @siDelSize
            SET @vcDelimitedString = SUBSTRING(@vcDelimitedString, @siStart, LEN(@vcDelimitedString) - @siStart + 1)
        END
    END

    RETURN
END

Thanks again Terri! I had been looking for a good, understandable (I'm not a SQL guru) way to pass a delimited string or array to SQL for inserts. I read through a couple of articles I found that left my head spinning, and decided to go with the old dynamic SQL method since I at least understood what it did. Your example (and Dinakar's and Karen's) is exactly what I had been looking for.

Thanks,

Kevin