Saturday, February 25, 2012

Decimal and Number Formatting

I have written a function where I define the return value as decimal. Now I need to format it
so that negative numbers look like (123.34%) and positive numbers like 123.34%. When I try this, I am
getting values like (123.3456788). How do I get rid of these extra decimals?
** Format(Fields!<FieldName>.Value,"#,###,##0.00;(#,###,##0.00)") is not working since I am using Sum for aggregation.
Thanks a lot for your help.

|||Go to the Textbox Properties dialog in the designer and specify
"#,##0.00%;(#,##0.00%);Zero" (without the double quotes) as the custom
format string.
Ravi Mumulla
SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
|||It works! Thanks a lot.
|||Try setting the Format property at the cell level (right-click the cell or cells, choose Properties, and set the Format).
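The three-section custom format string above ("positive;negative;zero") behaves the same way outside SSRS; here is a minimal Python sketch of that logic (the helper name is made up, and note that in the .NET format the % placeholder multiplies the value by 100):

```python
def format_percent(value):
    """Mimic the .NET three-section format "#,##0.00%;(#,##0.00%);Zero":
    the '%' placeholder multiplies by 100, negatives are parenthesized
    (shown without a minus sign), and zero prints "Zero"."""
    if value == 0:
        return "Zero"
    body = f"{abs(value) * 100:,.2f}%"
    return body if value > 0 else f"({body})"

print(format_percent(1.2334))   # 123.34%
print(format_percent(-1.2334))  # (123.34%)
print(format_percent(0))        # Zero
```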

decimal

Dear guys,
I have data like 28.56 and I want only the 56; how can I get it?
I can get the 28 after converting to the int datatype, but what about the 56? Is there
any built-in function available?
Thanks
NOOR
On Mon, 2 Aug 2004 23:41:31 -0700, Noor wrote:

Hi Noor,
Do you want 0.56 or 56?
Several options are in the script below.
declare @a decimal(4,2)
set @a = 28.56
select @a
select cast(@a as int)
select @a - cast(@a as int)
select (@a - cast(@a as int)) * 100
select substring(cast(@a - cast(@a as int) as varchar),3,2)
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Thanks Hugo
Noor
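Hugo's options are plain arithmetic; the same idea in Python for comparison (a sketch, not part of the original thread):

```python
from decimal import Decimal

a = Decimal("28.56")
whole = int(a)                    # the 28
fraction = a - whole              # Decimal('0.56')
hundredths = int(fraction * 100)  # the 56, as an integer

print(whole, fraction, hundredths)
```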

Deciding which indexes to defragment

I want to start monitoring index fragmentation and defragmenting as
necessary. My indexes have never been defragmented so I am trying to
determine which ones I should start with. I've run DBCC SHOWCONTIG and
saved the results to a table. What criteria should I use to determine
which indexes most need defragmenting? Logical fragmentation? Extent
fragmentation? Table size?
BOL states that, "Logical Scan Fragmentation and, to a lesser extent,
Extent Scan Fragmentation values give the best indication of a table's
fragmentation level." So should I sort by highest logical scan
fragmentation first? And how should the size of the table affect my
decision? For example, I'm thinking it would be more useful to
defragment a table with 100 million rows and 30% fragmentation than a
table with 1 million rows and 50% fragmentation. Does that make sense?
Thanks

|||Hi
This is the place to start:
Microsoft SQL Server 2000 Index Defragmentation Best Practices
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
--
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
<pshroads@gmail.com> wrote in message
news:1148424473.457642.7510@i40g2000cwc.googlegroups.com...
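One way to combine the two signals from the question, sketched in Python (the numbers and the pages-times-fragmentation score are illustrative, not from the whitepaper): weight logical fragmentation by table size so that large, badly fragmented tables sort first, and skip tiny tables where fragmentation percentages are mostly noise.

```python
# Hypothetical SHOWCONTIG results: (table, pages, logical_frag_pct)
results = [
    ("BigTable",   1_200_000, 30.0),
    ("SmallTable",    12_000, 50.0),
    ("TinyTable",         40, 90.0),
]

# Ignore tiny tables, then rank by pages * fragmentation so big,
# badly fragmented tables come first in the defrag queue.
candidates = [r for r in results if r[1] >= 1000]
candidates.sort(key=lambda r: r[1] * r[2], reverse=True)

for table, pages, frag in candidates:
    print(f"{table}: {pages} pages, {frag}% logical fragmentation")
```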


Deciding which Database Server to install a new database

We have 3 Microsoft SQL 2000 servers. All of them have similar hardware configurations. I need to identify a database server on which to install a new database instance. I believe my decision should be based on available memory and processor workload at any given time. I need your help to identify the right server. Thank you.

Well, use perfmon to get a baseline from your servers and see how they are utilized. Make sure that you monitor a timespan which represents a common usage scenario. In addition, check which license type you are using on the servers.

Jens K. Suessmeyer

http://www.sqlserver2005.de
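Once perfmon baselines exist, the comparison Jens suggests is simple; a Python sketch with made-up numbers and a deliberately naive "headroom" score:

```python
# Hypothetical perfmon baselines per server: avg CPU % and avg available memory (MB)
baselines = {
    "SQL01": {"cpu_pct": 65, "avail_mb": 1200},
    "SQL02": {"cpu_pct": 35, "avail_mb": 3400},
    "SQL03": {"cpu_pct": 50, "avail_mb": 2100},
}

def headroom(stats):
    # Naive score: more free CPU and more free memory = more headroom.
    return (100 - stats["cpu_pct"]) * stats["avail_mb"]

best = max(baselines, key=lambda s: headroom(baselines[s]))
print(best)  # SQL02
```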

Deciding between Access and MSDE

We are a small company writing primarily in-house
software, but also potentially developing software to
market.
We are converting from writing all of our software
completely in Access to writing in Visual Studio (C#). We
are confident this is the best decision for both our in-house
and potentially marketable software.
What is a bit of a question mark is whether it is best to
go with Access/Jet databases (questionable since MS has
deprecated the Jet engine and is not upgrading it anymore) or
MSDE. My feeling is that for future functionality and
scalability, MSDE (SQL Server Express 2005 when it comes out of
beta and we upgrade to .NET 2.0) is the better choice.
Here are my concerns:
1) How common is it to hit the 2GB per database limit?
That sounds very large to me and seems unlikely to be a
problem except in extremely large applications. We are not
storing pictures or documents, only standard data types
(text, int, bool).
2) In the 2GB data structure, is there any reason I can't
have an archive database to move old data into to keep the
size of the working database down if necessary?
3) From a deployability standpoint, Access is a breeze.
Copy a file to wherever you want it and point to it. Is
there a way to build easy deployment of MSDE into our core
application so there is minimal effort (choosing a
directory or location, even if installed on a server) on
the part of the end user?
Thanks,
Dan
dan@builtbydan.com
> 1) How common is it to hit the 2GB per database limit?
> That sounds very large to me and seems unlikely to be a
> problem except in extremely large applications. We are not
> storing pictures or documents, only standard data types
> (text, int, bool).
That depends on your workload and structure; it's hard to say, but for a
normal application it will fit your needs. If it won't, it's no problem to upgrade your db to SQL
Server 2000, or to the SQL Server Express database, which won't have this 2
GB limit at all.

> 2) In the 2GB data structure, is there any reason I can't
> have an archive database to move old data into to keep the
> size of the working database down if necessary?
Yeah, that could be a choice, but I would rather store all data in one database as
long as the size allows.

> 3) From a deployability standpoint, Access is a breeze.
> Copy a file to wherever you want it and point to it. Is
> there a way to build easy deployment of MSDE into our core
> application so there is minimal effort (choosing a
> directory or location, even if installed on a server) on
> the part of the end user?
MSDE doesn't cost so much CPU time that you cannot install it on a file
server or a workstation. So installing MSDE on every other workstation,
detaching/attaching the db file, and switching your application settings over to the
new storage isn't a hassle.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
"Dan Fontenot" <dan@builtbydan.com> wrote in message
news:059301c545c6$fc6e10c0$a401280a@phx.gbl...
|||hi Dan,
Dan Fontenot wrote:
> Here are my concerns:
> 1) How common is it to hit the 2GB per database limit?
> That sounds very large to me and seems unlikely to be a
> problem except in extremely large applications. We are not
> storing pictures or documents, only standard data types
> (text, int, bool).
personally I never hit that limit with "standard" data types, but this depends
on the customer's needs...

> 2) In the 2GB data structure, is there any reason I can't
> have an archive database to move old data into to keep the
> size of the working database down if necessary?
nope... you can go for that design by implementing a sort of partitioning...
something like vertical partitioning, where all columns related to
a particular object model are stored in one database and are not related to
objects in the other one...
but this has little to do with other performance tuning, including
partitioned servers and federated databases...

> 3) From a deployability standpoint, Access is a breeze.
> Copy a file to wherever you want it and point to it. Is
> there a way to build easy deployment of MSDE into our core
> application so there is minimal effort (choosing a
> directory or location, even if installed on a server) on
> the part of the end user?
yes, distributing a JET database really is a breeze, but you can have simple
deployment scenarios like
http://msdn.microsoft.com/msdnmag/is...baseinstaller/ ,
and you can easily use the default SQL Server data path (..\Program
Files\Microsoft SQL Server\MSSQL\Data [for a default instance]) or specify a
custom target position, both at MSDE install time, using the DATADIR
parameter
(http://msdn.microsoft.com/library/de...stsql_84xl.asp),
valid for all databases, or at runtime, by specifying a specific path in the
CREATE DATABASE statement
(http://msdn.microsoft.com/library/de...reate_1up1.asp)...
usually the database folder should not be shared among "standard" Windows
users for security reasons, but you can actually put the files wherever you want
as long as the Windows account running the SQL Server service is able to reach
them...
in the SQL Express scenario, you have further options, where you can take
advantage of so-called XCopy deployment
http://msdn.microsoft.com/library/?u...seoverview.asp ,
partially available in SQL Server 2000 too via the sp_attach_db method
(http://msdn.microsoft.com/library/de...ae-az_52oy.asp)...
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm
http://italy.mvps.org
DbaMgr2k ver 0.11.1 - DbaMgr ver 0.57.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
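The attach-at-install idea Andrea mentions can be scripted; the helper below only builds the SQL Server 2000-era sp_attach_db command as a string (the database name and paths are illustrative), as a sketch:

```python
def attach_statement(db_name, data_dir):
    # Build the sp_attach_db call for an .mdf/.ldf pair that was
    # copied into place by the installer; names/paths are made up.
    mdf = rf"{data_dir}\{db_name}.mdf"
    ldf = rf"{data_dir}\{db_name}_log.ldf"
    return (f"EXEC sp_attach_db @dbname = N'{db_name}', "
            f"@filename1 = N'{mdf}', @filename2 = N'{ldf}'")

print(attach_statement("AppDb", r"C:\Program Files\Microsoft SQL Server\MSSQL\Data"))
```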

deceptively simple join / select question

Ok, I have two tables with a child/parent or one -> many relationship:

parent_table:
pid int primary key
pname varchar

child_table:
cid int primary key
pid int
cname varchar

Say the contents of these two tables are:

parent_table:
pid pname:
1 Ben
2 Jesse
3 Michael

child_table
pid cid cname
1 1 ben_Child1
1 2 ben_Child2
1 3 ben_Child3
2 4 jesse_Child1
2 5 jesse_Child2
2 6 jesse_Child3
3 7 michael_Child1
3 8 michael_Child2
3 9 michael_Child3

Now what I would like to be able to do is:

select pname, cname
from
parent table a,
child_table b
where a.pid = b.pid

Except! Instead of getting the results in the form of:

Ben ben_Child1
Ben ben_Child2
Ben ben_Child3
...

I would like them in

Ben ben_Child1 ben_Child2

Now normally this would be impossible (I think) since the query would return an unknown number of columns. But in this case I only care about the FIRST TWO children for each parent. So I'm sure there's some way to do this with a simple select, but I don't know how. Anyone?Are you trying to get everything from both tables in the form pname cname? If so what about a Cross Join?

|||Originally posted by JODonnell:
Are you trying to get everything from both tables in the form pname cname? If so what about a Cross Join?

No, a cross join would give me EVERYTHING. I am trying to get a subset of the results but in addition I am trying to map two rows to two columns eg instead of:

pname1 cname1
pname1 cname2

I want:

pname1 cname1 cname2

|||What about GROUP BY:

Select p.*,c.* From Ptable P Join Ctable C ON P.pid = C.pid Where [P.pid = C.pid] GROUP BY P.pid

Sorry for the mess but it's almost 5.

John

|||Still no good. The GROUP BY won't help because I'm not doing any sums or avgs or counts or anything. Adding the GROUP BY won't change the results at all, actually, from what I know.

|||I was bored.

I think this is what you're looking for

Rgds,
Jim.

declare @d_id int;
declare @c_name varchar(100);
declare @c_arr varchar(2000);
declare @tmp varchar(100);

declare @x table([id] int, [name] varchar(2000))

DECLARE d cursor for
select depid
from dept;

OPEN d
FETCH NEXT FROM d INTO @d_id
WHILE @@FETCH_STATUS = 0
BEGIN
set @tmp='';
set @c_arr='';

DECLARE c CURSOR FOR
SELECT name
FROM emp
where deptid = @d_id

OPEN c
FETCH next from c into @c_name
while @@fetch_status = 0
BEGIN
print @d_id
print @c_name

set @tmp = @c_arr
set @c_arr = @c_name+','+@tmp
fetch next from c into @c_name
END
CLOSE c
DEALLOCATE c
if (len(@c_arr)>1)
begin
Insert @x values(@d_id, substring(@c_arr,1,len(@c_arr)-1))
end

FETCH NEXT FROM d INTO @d_id
END
CLOSE d
DEALLOCATE d

select id, name as name from @x
GO

|||You might check yesterday's thread (http://www.dbforums.com/t989683.html) on this topic.

-PatP
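The "first two children per parent" reshaping is easier to see outside SQL; here is the same pivot in Python (a sketch, using the sample data from the post):

```python
from collections import defaultdict

parents = {1: "Ben", 2: "Jesse", 3: "Michael"}
children = [
    (1, "ben_Child1"), (1, "ben_Child2"), (1, "ben_Child3"),
    (2, "jesse_Child1"), (2, "jesse_Child2"), (2, "jesse_Child3"),
    (3, "michael_Child1"), (3, "michael_Child2"), (3, "michael_Child3"),
]

# Group children by parent id, then keep only the first two per parent,
# producing one row per parent: (pname, cname1, cname2).
by_parent = defaultdict(list)
for pid, cname in children:
    by_parent[pid].append(cname)

rows = [(parents[pid], *names[:2]) for pid, names in by_parent.items()]
for row in rows:
    print(row)
```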

Decaying index?

We had an issue yesterday that we resolved but I would like some input as to
whether this is normal behavior for a database. I have had a SQL Server
backend on a web application that has been working flawlessly for about a
year. The app seemed to get a tiny bit slower each month, but it was no
worry in that no page ever took more than a second to return. One page was
taking about 10 seconds to return a full query, but it was still well within
a tolerable delay (and a filter was added to allow the user to speed it up).
Suddenly, yesterday the same query was taking about a minute to execute and
was causing the page to time out. We dug around and narrowed it down to one
table that has some 15,000 records and we started to look at the design of
the table. We noticed that the table had a single index, the primary key,
and it was created non-clustered.
We found that we couldn't duplicate the problem if we backed up the database
and restored it to a different server, but the problem would be the same if
we restored it to a different database on the same server. The indexes had
been rebuilt by the maintenance plan the night before, but rebuilding them
again didn't fix the problem, either. Then we set the index as clustered
and the problem vanished.
Is this a typical database maintenance thing? Did we goof when we setup the
index in the first place and a good DBA could have caught this right away?
Was this just some weird random chance happening? Is it likely that the
indexes were corrupted or might this have just been a fluke fix and the
problem could crop up again? I am just a little leery of installing this
application on other servers if I might get a call a year from now that
there is another problem like this, and I would like to know if it is
inevitable or if we can do something to ensure it doesn't happen again.

|||Although your problem is strange... remember that SQL will use a nonclustered
index IF the query is very selective (perhaps < 5% of the records being
returned)... This might account for SQL NOT using the nonclustered index, but using
the clustered one... it does NOT account for the same/different-server behavior you mention.
"Stephen Brown" <nospam@telusplanet.net> wrote in message
news:cPL0b.168$f7.69628@localhost...
|||>> Did we goof when we set up the index in the first place
>> and a good DBA could have caught this right away?
Nope - and it looks like you were correct for your data
distribution until recently.
You should look at the query plan. A clustered index has
the data page as the leaf level but a non-clustered index
needs an extra read to get the data if the index is not
covering. This means that sometimes a table scan can be
more efficient than using an index (especially for a small
table like this), but a clustered index in the same
situation would be chosen.
If the index is covering then a non-clustered index would
probably be better.
It's difficult to predict the indexes the optimiser will
use (even when it gets it right - was a lot easier in the
good old days of 6.5) so you really have to look at the
query plans - a lot easier if all access is via stored
procs.
You have to keep monitoring systems and spot things like
this early.|||> We found that we couldn't duplicate the problem if we backed up the
database
> and restored it to a different server, but the problem would be the
same if
> we restored it to a different database on the same server.
Are both servers at the same SQL service pack level? Do they have the
same number of CPUs? I suggest you compare the execution plans in Query
Analyzer. I would expect you will find a difference to explain the
suboptimal performance. All things being equal, you should get
identical plans and performance. Execution plan caching that would
explain the difference with the original database but this shouldn't be
an issue with a newly restored database.
> Is this a typical database maintenance thing?
Nope. However, it is likely that the maintenance provided new
statistics for the optimizer and a different execution plan resulted.
> Did we goof when we set up the index in the first place and a good DBA
> could have caught this right away?
If you have only one index on a table, it should be clustered unless you
have a specific reason to do otherwise. The SQL Server optimizer
behaves best when a table has a clustered index, and you should choose a
table's clustered index thoughtfully. You are more likely to get a
sub-optimal execution plan when you don't have a clustered index and/or
have inappropriate indexes.
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"Stephen Brown" <nospam@telusplanet.net> wrote in message
news:cPL0b.168$f7.69628@localhost...
>|||It's very odd that restoring to a different server fails to show the problem
but restoring to a different database on the same server still has the
problem. That suggests it has nothing to do with a corrupt index. It
suggests it has more to do with the amount of memory on the machines, I/O
topologies, and interactions with other work going on in that instance of
SQL Server.
Your overall behavior would not be surprising if you are doing range scans
on the non-clustered index. Since there is no ordering to how the rows
themselves are stored in that index over time it is likely to become
increasingly random. For example, on an initial load, where you happened to
insert in sorted order of the primary key, the rows would be effectively
clustered together by primary key. Oversimplifying to make my point, if you
fit 20 rows per page and typically retrieved 15 rows in the query you'd
average ~1.5 I/Os to answer the query (the .5 covering the case where you
had to cross page boundaries). With inserts, updates, and deletes over time
though you'd lose the ordering. That same 15 row query would eventually
take ~15 I/Os. So you have an order of magnitude increase. Rebuilding the
non-clustered index doesn't help because the rows themselves aren't stored
as part of the index, they are in heap storage. With a clustered index you
don't have this problem. The rows are stored in the index. Essentially,
you'll stay at an average of ~1.5 I/Os indefinitely. The benefit of a
clustered index grows as the number of rows retrieved grows. In the
non-clustered index example 100 rows could lead to ~100 I/Os. In the
clustered index example, assuming a freshly rebuilt or defragmented index,
since SQL Server actually does 64K I/Os the entire 100 rows might come into
memory on a single I/O. Again allowing for some rows to be outside the
range, you stay at ~1.5 I/Os!
SQL Server has all kinds of algorithms to reduce I/O which I ignore in the
above example. I also ignored that there may be additional I/Os to retrieve
index nodes themselves. Severe fragmentation might cause my clustered index
example to grow slightly, but indexdefrag or rebuilding the index does fix
this. More memory implies better caching and more readahead, which is why
you might have seen a difference depending on which server you tried this
on. Spreading data over more drives (via files in filegroups not RAID) will
also increase readahead.
The key to non-clustered indices is to make sure they reduce the number of rows
that must actually be retrieved to an absolute minimum. The best case is
where the key is unique and you are doing exact key matches. The next best
case is where one or more indexes really reduce the number of rows to
retrieve rather dramatically. If you have a non-clustered index on
First-Name and ask the query "Select * from t1 where First-Name = 'John' and
Last-Name = 'Cohen' and City = 'New York'" your performance will be terrible
because you'll retrieve everyone with the first name John, then filter for
Cohen and New York. Things will be little better if your only NC index is
on Last-Name. It will be worse if it's on City. But if you have an index
on all three then the query processor will likely choose to intersect two or
three indexes and only retrieve the data rows that match all of them. In
fact, there is only one John Cohen in New York City (according to QwestDex)
so using all three indexes would get you down to a single row retrieval
whereas having only one of those indices would lead to perhaps thousands of
I/Os to answer this query.
So for your example you need to look at the data and the key queries and
rethink your index strategies. Perhaps a single clustered index is
sufficient. Adding a small number of non-clustered indices to that may
further improve performance and maintain it over time.
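Hal's strategy above can be sketched in T-SQL. This is only an illustration (the table and index names are hypothetical, echoing the First-Name/Last-Name/City example in the reply):

```sql
-- One clustered index (here on the primary key) keeps the rows themselves
-- in index order; narrow non-clustered indexes give the optimizer
-- candidates to seek on or to intersect.
CREATE TABLE t1 (
    id        INT         NOT NULL PRIMARY KEY CLUSTERED,
    FirstName VARCHAR(40) NOT NULL,
    LastName  VARCHAR(40) NOT NULL,
    City      VARCHAR(40) NOT NULL
)
CREATE NONCLUSTERED INDEX ix_t1_FirstName ON t1 (FirstName)
CREATE NONCLUSTERED INDEX ix_t1_LastName  ON t1 (LastName)
CREATE NONCLUSTERED INDEX ix_t1_City      ON t1 (City)

-- With all three indexes present, the optimizer can intersect two or three
-- of them and retrieve only the rows matching every predicate:
SELECT * FROM t1
WHERE FirstName = 'John' AND LastName = 'Cohen' AND City = 'New York'
```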
--
Hal Berenson, SQL Server MVP
True Mountain Group LLC
"Stephen Brown" <nospam@telusplanet.net> wrote in message
news:cPL0b.168$f7.69628@localhost...
> We had an issue yesterday that we resolved but I would like some input as to
> whether this is normal behavior for a database. I have had a SQL Server
> backend on a web application that has been working flawlessly for about a
> year. The app seemed to get a tiny bit slower each month, but it was no
> worry in that no page ever took more than a second to return. One page was
> taking about 10 seconds to return a full query, but it was still well within
> a tolerable delay (and a filter was added to allow the user to speed it up).
> Suddenly, yesterday the same query was taking about a minute to execute and
> was causing the page to time out. We dug around and narrowed it down to one
> table that has some 15,000 records and we started to look at the design of
> the table. We noticed that the table had a single index, the primary key,
> and it was created non-clustered.
> We found that we couldn't duplicate the problem if we backed up the database
> and restored it to a different server, but the problem would be the same if
> we restored it to a different database on the same server. The indexes had
> been rebuilt by the maintenance plan the night before, but rebuilding them
> again didn't fix the problem, either. Then we set the index as clustered
> and the problem vanished.
> Is this a typical database maintenance thing? Did we goof when we set up the
> index in the first place and a good DBA could have caught this right away?
> Was this just some weird random chance happening? Is it likely that the
> indexes were corrupted or might this have just been a fluke fix and the
> problem could crop up again? I am just a little leery of installing this
> application on other servers if I might get a call a year from now that
> there is another problem like this and would like to know if it is
> inevitable or if we can do something to ensure it doesn't happen again.
>|||Hi Stephan,
"Stephen Brown" <nospam@telusplanet.net> wrote in message
news:cPL0b.168$f7.69628@localhost...
>I have had a SQL Server
>backend on a web application that has been working flawlessly for about a
>year. The app seemed to get a tiny bit slower each month, but it was no
>worry in that no page ever took more than a second to return.
It's nothing to 'worry' about but definitely a thing to look into and
understand the reasons for this trend.
I guess you learned that lesson the hard way :-)
> We noticed that the table had a single index, the primary key,
> and it was created non-clustered.
If a table has no clustered index, it is a heap which means that there is no
logical order to the rows.
Performance issues with heaps are basically physical fragmentation and
forwarding pointers for rows that had to be moved off their original pages.
These will naturally increase over time if not taken care of.
> The indexes had
> been rebuilt by the maintenance plan the night before, but rebuilding them
> again didn't fix the problem, either. Then we set the index as clustered
> and the problem vanished.
Rebuilding the non clustered index will not solve either problem mentioned
above since the table is a heap.
In a clustered index, the rows have to be logically ordered by the
clustering key so creating the clustered index solved both problems as it
made SQL Server rearrange all rows.
In order to get rid of the forwarding pointers and the physical
fragmentation of a heap, you can issue DBCC SHRINKDATABASE or
SHRINKFILE and use the NOTRUNCATE option (look in BOL for more details).
This will 'defragment' the pages in the data file, rearrange the rows and
get rid of all forwarding pointers.
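Before reaching for a rebuild or shrink, it helps to measure how fragmented the structure actually is. A hedged sketch for SQL Server 2000 (the table name below is hypothetical):

```sql
-- DBCC SHOWCONTIG reports fragmentation statistics; a Scan Density near
-- 100% and low Logical Scan Fragmentation indicate a healthy structure.
-- TABLERESULTS returns the figures as a result set rather than messages.
DBCC SHOWCONTIG ('tblOrders') WITH TABLERESULTS, ALL_INDEXES
```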
Remember that clustered indexes also need maintenance. Those will be taken
care of by the agent's 'auto' maintenance plan index rebuild job.
> Is this a typical database maintenance thing?
Yes it is. Performance will usually degrade due to fragmentation at
various levels, depending on the nature of the activity in your DB and your
index design.
> I am just a little leary of installing this
> application on other servers if I might get a call a year from now that
> there is another problem like this and would like to know if it is
> inevitable or if we can do something to ensure it doesn't happen again.
There is nothing to worry about. A proper maintenance strategy performed
regularly will keep your DB up and running in good shape for years and
years.
If you want to dig deeper into this issue, I recommend Kalen Delaney's
'Inside SQL Server 2000'.
It contains in depth information about index and data structures among other
topics.
HTH.|||I just reread your post and the other replies and realized I have missed the
'suddenly yesterday' issue :-)
If your table had not experienced any major updates yesterday, i guess my
suggestion is invalid in your case although it is still true and a good
practice...
HTH :-)
"Amy" <NOSPAM_l.a@usa.com> wrote in message
news:Or0JHTzZDHA.3768@tk2msftngp13.phx.gbl...
> Hi Stephan,
> "Stephen Brown" <nospam@.telusplanet.net> wrote in message
> news:cPL0b.168$f7.69628@.localhost...
> >I have had a SQL Server
> >backend on a web application that has been working flawlessly for about a
> >year. The app seemed to get a tiny bit slower each month, but it was no
> >worry in that no page ever took more than a second to return.
> It's nothing to 'worry' about but definetely a thing to look into and
> understand the reasons for this trend.
> I guess you learned that lesson the hard way :-)
> > We noticed that the table had a single index, the primary key,
> > and it was created non-clustered.
> If a table has no clustered index, it is a heap which means that there is no
> logical order to the rows.
> Performance issues with heaps are basically physical fragmentation and
> forwarding pointers for rows that had to be moved off their original pages.
> These will naturally increase over time if not taken care of.
> > The indexes had
> > been rebuilt by the maintenance plan the night before, but rebuilding them
> > again didn't fix the problem, either. Then we set the index as clustered
> > and the problem vanished.
> Rebuilding the non clustered index will not solve either problem mentioned
> above since the table is a heap.
> In a clustered index, the rows have to be logically ordered by the
> clustering key so creating the clustered index solved both problems as it
> made SQL Server rearrange all rows.
> In order to get rid of the forwarding pointers and the physical
> fragmentation of a heap, you can issue DBCC SHRINKDATABASE or
> SHRINKFILE and use the NOTRUNCATE option (look in BOL for more details).
> This will 'defragment' the pages in the data file, rearrange the rows and
> get rid of all forwarding pointers.
> Remember that clustered indexes also need maintenance. Those will be taken
> care of by the agent's 'auto' maintenance plan index rebuild job.
> > Is this a typical database maintenance thing?
> Yes it is. Performance will usually degrade due to fragmentation at
> various levels, depending on the nature of the activity in your DB and your
> index design.
> > I am just a little leery of installing this
> > application on other servers if I might get a call a year from now that
> > there is another problem like this and would like to know if it is
> > inevitable or if we can do something to ensure it doesn't happen again.
> There is nothing to worry about. A proper maintenance strategy performed
> regularly will keep your DB up and running in good shape for years and
> years.
> If you want to dig deeper into this issue, I recommend Kalen Delaney's
> 'Inside SQL Server 2000'.
> It contains in depth information about index and data structures among other
> topics.
> HTH.
>

Declare @A

Hi all! I'm trying to set the value of @d to a value from col1 in
tblTest. I have tried the following code but it didn't work.
DECLARE @d VARCHAR(25)
SELECT * FROM tblTest
SET @d = tblTest.col1
Someone please help me! Thanks!|||SELECT @d = col1 FROM tblTest
or
SET @d = (SELECT col1 FROM tblTest)
Where col1 is THE COLUMN NAME not the first column.
--
Nik Marshall-Blank MCSD/MCDBA
"Stanley" <xstanley@gmail.com> wrote in message
news:1128081085.771682.230680@f14g2000cwb.googlegroups.com...
> Hi all! I'm trying to set the value of @d to a value from col1 in
> tblTest. I have tried the following code but it didn't work.
> DECLARE @d VARCHAR(25)
> SELECT * FROM tblTest
> SET @d = tblTest.col1
> Someone please help me! Thanks!
>|||DECLARE @d VARCHAR(25)
SELECT @d = col1 FROM tblTest
http://sqlservercode.blogspot.com/
"Stanley" wrote:
> Hi all! I'm trying to set the value of @d to a value from col1 in
> tblTest. I have tried the following code but it didn't work.
> DECLARE @d VARCHAR(25)
> SELECT * FROM tblTest
> SET @d = tblTest.col1
> Someone please help me! Thanks!
>|||Thanks so much, that's what I really need!!


dec to varchar

Hello,
I need some help with string functions.
I have these statements:
declare @text varchar(30), @quantity dec(32,12), @date datetime
select @text = 'qwerty', @quantity = -123.123456, @date = getdate()
select @text + ';' + convert(varchar, @quantity) + ';' + convert(char(10),
@date, 121) as string
They give this result:
string
---
qwerty;-123.123456000000;2005-04-12
@quantity is shown as -123.123456000000. Is there an easy way to get rid of
those trailing zeros?
Esko|||Sorry, that does not help me.
My quantities can have any number of decimal digits between 0 and 12 and I
always want to get rid of all trailing zeros.
So 1.000000000000 should be shown as "1", -33.123456789010 as
"-33.12345678901", -0.112233440000 as "-0.11223344" and so on.
"mark baekdal" wrote:

> can you do this?
> declare @text varchar(30), @quantity dec(32,6), @date datetime
> select @text = 'qwerty', @quantity = -123.123456, @date = getdate()
> select @text + ';' + convert(varchar, @quantity) + ';' + convert(char(10),
> @date, 121) as string
>|||or maybe this...
declare @text varchar(30), @quantity dec(32,12), @date datetime
select @text = 'qwerty', @quantity = -123.123456, @date = getdate()
select @text + ';' + replace(rtrim(replace(convert(varchar, @quantity), '0', ' ')), ' ', '0')
+ ';' + convert(char(10), @date, 121) as string
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"Esko" wrote:

> Hello,
> I need some help with string functions.
> I have these statements:
> declare @text varchar(30), @quantity dec(32,12), @date datetime
> select @text = 'qwerty', @quantity = -123.123456, @date = getdate()
> select @text + ';' + convert(varchar, @quantity) + ';' + convert(char(10),
> @date, 121) as string
> They give this result:
> string
> ---
> qwerty;-123.123456000000;2005-04-12
> @quantity is shown as -123.123456000000. Is there an easy way to get rid of
> those trailing zeros?
> Esko
>|||the next query should work; just watch the formatting as it changed in the post.
The replace function replaces '0' with a single space ' ', then trims the
result and then fills any gaps back in with '0', so this (as far as I've tested)
always works.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"Esko" wrote:
> Sorry, that does not help me.
> My quantities can have any number of decimal digits between 0 and 12 and I
> always want to get rid of all trailing zeros.
> So 1.000000000000 should be shown as "1", -33.123456789010 as
> "-33.12345678901", -0.112233440000 as "-0.11223344" and so on.
>
> "mark baekdal" wrote:
>
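One caveat on the replace/rtrim trick above: for a value such as 100.000000000000 it yields "100." with a dangling decimal point. A more literal sketch, assuming the same declarations as in the thread:

```sql
declare @quantity dec(32,12), @s varchar(40)
select @quantity = -123.123456, @s = convert(varchar(40), @quantity)
-- Trim only when a decimal point is present: drop trailing zeros,
-- then any dangling '.', so 1.000000000000 becomes '1'.
if charindex('.', @s) > 0
begin
    while right(@s, 1) = '0' set @s = left(@s, len(@s) - 1)
    if right(@s, 1) = '.' set @s = left(@s, len(@s) - 1)
end
select @s as trimmed  -- -123.123456
```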

DEC TO HEX CONVERSION WHILE IMPORTING

I HAVE DECIMAL NUMBER IN MY TEST TABLE COLUMN CARD_NUMBER
AS FOLLOWS
2571549730.0
2571549826.0
2571550034.0
2571550210.0
2571550306.0
2571551378.0
2571551586.0
2571551682.0
2571551762.0
2571551874.0
I WANT TO CREATE A TEMP TABLE AND MOVE THIS COLUMN BUT
BEFORE MOVING IT SHOULD CONVERT INTO HEXADECIMAL NUMBER
WHILE INSERTING INTO TEMP TABLE
THANKS
Mustafa,
If your decimal values fit into a bigint, you could write a
user-defined function (SQL Server 2000 required):
create function dec2hex (
@decimal bigint
) returns varbinary(20) as begin
declare @b varbinary(20)
set @b = 0x
while @decimal > 0 begin
set @b = cast(cast(@decimal%256 as tinyint) as binary(1)) + @b
set @decimal = @decimal/256
end
return @b
end
go
select dbo.dec2hex(1)
select dbo.dec2hex(2571551874.0)
go
drop function dec2hex
-- Steve Kass
-- Drew University
-- Ref: 77AF63E1-A927-40A1-A7B0-6EA31F1C80F2
MUSTAFA wrote:

>I HAVE DECIMAL NUMBER IN MY TEST TABLE COLUMN CARD_NUMBER
>AS FOLLOWS
>2571549730.0
>2571549826.0
>2571550034.0
>2571550210.0
>2571550306.0
>2571551378.0
>2571551586.0
>2571551682.0
>2571551762.0
>2571551874.0
>I WANT TO CREATE A TEMP TABLE AND MOVE THIS COLUMN BUT
>BEFORE MOVING IT SHOULD CONVERT INTO HEXADECIMAL NUMBER
>WHILE INSERTING INTO TEMP TABLE
>
>THANKS
>
|||YES IT WORKS
THANKS Mr. STEVE

>--Original Message--
>Mustafa,
> If your decimal values fit into a bigint, you could write a
>user-defined function (SQL Server 2000 required):
>create function dec2hex (
> @decimal bigint
>) returns varbinary(20) as begin
> declare @b varbinary(20)
> set @b = 0x
> while @decimal > 0 begin
> set @b = cast(cast(@decimal%256 as tinyint) as binary(1)) + @b
> set @decimal = @decimal/256
> end
> return @b
>end
>go
>select dbo.dec2hex(1)
>select dbo.dec2hex(2571551874.0)
>go
>drop function dec2hex
>-- Steve Kass
>-- Drew University
>-- Ref: 77AF63E1-A927-40A1-A7B0-6EA31F1C80F2
>MUSTAFA wrote:
>.
>
|||Dear Mr. Steve
i have created the user-defined function dec2hex in sql server
2000 and when i access it using sql server query analyser
it gives me the desired result, i.e. it converts a decimal into a
hexadecimal number
The query is as follows
select dbo.dec2hex(card_number) as card_number from
tbltest
But when i use it through a VB6 recordset object it shows ?
rather than showing the hexadecimal number
thanks

>--Original Message--
>Mustafa,
> If your decimal values fit into a bigint, you could write a
>user-defined function (SQL Server 2000 required):
>create function dec2hex (
> @decimal bigint
>) returns varbinary(20) as begin
> declare @b varbinary(20)
> set @b = 0x
> while @decimal > 0 begin
> set @b = cast(cast(@decimal%256 as tinyint) as binary(1)) + @b
> set @decimal = @decimal/256
> end
> return @b
>end
>go
>select dbo.dec2hex(1)
>select dbo.dec2hex(2571551874.0)
>go
>drop function dec2hex
>-- Steve Kass
>-- Drew University
>-- Ref: 77AF63E1-A927-40A1-A7B0-6EA31F1C80F2
>MUSTAFA wrote:
>.
>
|||Mustafa,
This function returns a varbinary(20) value, so if you are seeing ?
somewhere, then something in your application is not displaying
varbinary values correctly. I don't know how VB6 recordsets display
information of varbinary type. Can your VB recordset object display
what is in this recordset?
select 0x123456AB as aBinaryValue
It sounds like a VB issue, not a SQL Server issue.
Steve Kass
Drew University
mustafa wrote:
>Dear Mr. Steve
>i have created user define function dec2hex in sql server
>2000 and when i access it using sql server query analyser
>it give me desired result i.e it convert decimal into
>hexadecimal number
>The query is as follow
>select dbo.dec2hex(card_number) as card_number from
>tbltest
>But when i used it using VB6 recordset object it show ?
>rather then showing hexadecimal number
>thanks
>
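On the VB6 display question: rather than sending varbinary to the client, the hex value can be converted to a character string on the server. The helper below, master.dbo.fn_varbintohexstr, is an undocumented function that ships with SQL Server 2000, so treat this as a sketch to verify on your build:

```sql
-- Returns a varchar hex string (e.g. '0x...') that a VB6 recordset
-- can display as ordinary text.
select master.dbo.fn_varbintohexstr(dbo.dec2hex(2571551874)) as card_number_hex
```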

DEC ODBC

Hi
I've got a DTS package that connects to a database on a DEC Vax, but I don't
have the ODBC driver. Anyone know where I can get a DEC ODBC Driver?
Thanks
John|||It depends on what database. The following link lists ODBC
driver vendors:
http://www.sqlsummit.com/ODBCVend.htm
-Sue
On Thu, 11 Mar 2004 08:11:09 -0800, John Bandettini
<anonymous@discussions.microsoft.com> wrote:

>Hi
>I've got a DTS package that connects to a database on a DEC Vax, but I don't
>have the ODBC driver. Anyone know where I can get a DEC ODBC Driver?
>Thanks
>John

Debugging?

Hi,
Is there a way to debug an SQL Server stored procedure? I call a number of sp's from my .net app and need a way to step through them.
Thank you,|||http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsdebug/html/vxtskasqldebuggingexample.asp
|||Thanks. Your link also pointed to the following link:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsdebug/html/vchowapplicationdebugging.asp
that shows, supposedly, how to debug a stored procedure inline with your app. This is what I'm looking for, because I need to be able to see what the stored procedure is receiving in its parameters. I followed the steps in the link above, put a breakpoint in the stored procedure and another one in my app just before the stored procedure, and, contrary to the claims of the article, I was not able to step into the procedure. Any help getting this to work would be appreciated!
Thanks.|||Is the SQL Server on your local machine? If not, you have to install the remote debugging components. This article tells you how to install and configure remote debugging (as well as various other cool debugging tips):
http://www.dbazine.com/sql/sql-articles/cook1
|||It is on my local machine.
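When the debugger won't attach at all, a low-tech fallback for seeing what a procedure receives (a sketch with hypothetical object names) is to log the parameters from inside the procedure itself:

```sql
-- Hypothetical trace table and wrapper: record the incoming parameter
-- values, then carry on with the procedure's real work.
create table dbo.SpTrace (logged_at datetime default getdate(), msg varchar(400))
go
create procedure dbo.GetCustomer @CustomerId int as
begin
    insert dbo.SpTrace (msg)
    values ('GetCustomer @CustomerId = '
            + isnull(convert(varchar(12), @CustomerId), 'NULL'))
    -- ... the procedure's real work would follow here ...
end
```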

Debugging Woes

The following issue is happening a lot these days. For all the world it looks like a bug in BIDS:

I set a breakpoint in a Script Task (not a Script Component!) and the code dutifully stops on the breakpoint over and over again, as expected. (Please don't tell me about not being able to debug Script Components!)

At some point, when I run the package and hit the Script Task, the VSA editor opens as if it's going to take me to the breakpoint, but then it displays an error dialog box containing the following message:

Microsoft Visual Studio has lost its link to .
You work will be exported to C:\Documents and Settings\mgroh\My Documents when you quit the application

(Notice the space between "its link to" and the period in the first statement. It goes without saying that nothing is exported to My Documents.)

When I dismiss the dialog, execution resumes and the break point is ignored. In fact, breakpoints in all script tasks in the project are ignored.

Clearly, something is broken in the DTSX file, and is preventing VSA from finding the breakpoint. That's the only interpretation I can come up with for the "lost its link to" part of the message.

I have tried everything I can think of:
- Deleting all breakpoints and re-establishing the breakpoint
- Making a small change to the code to force VSA to re-evaluate the module
- Shutting down and re-opening the project
- Rebooting the computer (!!!!)

And NOTHING works. The module is just broken, and breakpoints in other Script Tasks don't work, either.

This really looks like a bug in SSIS, but I can't find anyone else who's complained of the same thing.

Any ideas? TIA!

- Mike

mike.groh wrote:

The following issue is happening a lot these days. For all the world it looks like a bug in BIDS:

I set a breakpoint in a Script Task (not a Script Component!) and the code dutifully stops on the breakpoint over and over again, as expected. (Please don't tell me about not being able to debug Script Components!)

At some point, when I run the package and hit the Script Task, the VSA editor opens as if it's going to take me to the breakpoint, but then it displays an error dialog box containing the following message:

Microsoft Visual Studio has lost its link to .
You work will be exported to C:\Documents and Settings\mgroh\My Documents when you quit the application

(Notice the space between "its link to" and the period in the first statement. It goes without saying that nothing is exported to My Documents.)

When I dismiss the dialog, execution resumes and the break point is ignored. In fact, breakpoints in all script tasks in the project are ignored.

Clearly, something is broken in the DTSX file, and is preventing VSA from finding the breakpoint. That's the only interpretation I can come up with for the "lost its link to" part of the message.

I have tried everything I can think of:
- Deleting all breakpoints and re-establishing the breakpoint
- Making a small change to the code to force VSA to re-evaluate the module
- Shutting down and re-opening the project
- Rebooting the computer (!!!!)

And NOTHING works. The module is just broken, and breakpoints in other Script Tasks don't work, either.

This really looks like a bug in SSIS, but I can't find anyone else who's complained of the same thing.

Any ideas? TIA!

- Mike

Mike,

No ideas I'm afraid no. I can only say that the use of script components/tasks can be ...errrr... flaky.

I usually try to recreate the script task and hope the problem goes away.

I know that doesn't help much but I wanted you to know that I feel your pain.

-Jamie

|||

Thanks for the confirmation that the Script Task may need re-creating. I was hoping it was something that you'd run into and had a nice little fix for. What a strange one! Flakey, to be sure!

If I detect any kind of pattern, such as actions that precede this problem, I'll be sure to let you know. I'm thinking I'm doing something wrong; surely, if this was a bug in SSIS or the VSA editor, other people would be complaining about it.

Thanks!

- Mike

|||I would sincerely appreciate it if someone with MSFT could assist with this bug. I am having this issue as well and it is very frustrating to say the least. You spend a significant amount of time building a package, testing each step, then BAM! it's broken in the way described above and the only thing I can do is to start over, copy and paste? This is bad... I have to create a new project, create all new variables (making sure I don't accidentally scope one in the wrong place) and so forth. Any more help or any ideas how to avoid this will be appreciated.|||Can you not just delete and re-create the script task, or is it at a package, or project level that you lose debug support?|||

Steven Barden wrote:

I would sincearly appreciate it if someone with MSFT could assist with this bug. I am having this issue as well and it is very frustration to say the least. You spend a significant amount of time building a package, testing each step, then BAM! it's broken in the way described above and the only thing I can do is to start over, copy and paste? This is bad... I have to create a new project, create all new variables (making sure I don't accidentaly scope one in the wrong place) and so forth. Any more help or any ideas how to avoid this will be appreciated.

Open up a Connect ticket.

https://connect.microsoft.com/SQLServer/Feedback|||

I am getting this error as well. I'm finding that I get the error message conditionally, depending on what line in my code I set the breakpoint on. If I set it on my "Try" line, then I DO NOT get the error. Once the debug window comes up, I can then set breakpoints where I really want to stop the code.

It's not ideal, but it is working for me.

If anyone has since found a resolution, I would greatly appreciate a response.

Thanks.

Mitch

|||I am having the same problem. Is this still not fixed over a year later?! Has anyone found out what the issue is? I just downloaded and installed SP1 for Team Suite as I have VS2005 Team Suite installed, and I still can't debug my SSIS packages. If I put any breakpoints in the script code when I try to debug I get the message:

"SQL Server Integration Services Script Task has encountered a problem and needs to close. We are sorry for the inconvenience."

I click on close, and it then comes up with the "Microsoft Visual Studio has lost its link to..." error and it runs the package but doesn't stop at any breakpoint. If I take out all breakpoints it will run without this error. A couple of times I have managed to put in a single breakpoint early in the package within a different script component which causes it to break... but at a completely different point within a completely different script component. It's so weird. Any help would be appreciated.

Hoots.

|||

I am running into this problem as well. Is Microsoft listening? It has been over 1 year since it was first posted.

I think this thread shows that enough people have been experiencing this issue. Isn't this time to pay attention to your paying customers?

By the way, dropping and re-creating script task doesn't help. I guess the only work around is to completely re-create the package, which is terrible and so problem-prone.

Thank you!

Debugging Woes

The following issue is happening a lot these days. For all the world it looks like a bug in BIDS:

I set a breakpoint in a Script Task (not a Script Component!) and the code dutifully stops on the breakpoint over and over again, as expected. (Please don't tell me about not being able to debug Script Components!)

At some point, when I run the package and hit the Script Task, the VSA editor opens as if it's going to take me to the breakpoint, but then it displays an error dialog box containing the following message:

Microsoft Visual Studio has lost its link to .
You work will be exported to C:\Documents and Settings\mgroh\My Documents when you quit the application

(Notice the space between "its link to" and the period in the first statement. It goes without saying that nothing is exported to My Documents.)

When I dismiss the dialog, execution resumes and the break point is ignored. In fact, breakpoints in all script tasks in the project are ignored.

Clearly, something is broken in the DTSX file, and is preventing VSA from finding the breakpoint. That's the only interpretation I can come up with for the "lost its link to" part of the message.

I have tried everything I can think of:
- Deleting all breakpoints and re-establishing the breakpoint
- Making a small change to the code to force VSA to re-evaluate the module
- Shutting down and re-opening the project
- Rebooting the computer (!!!!)

And NOTHING works. The module is just broken, and breakpoints in other Script Tasks don't work, either.

This really looks like a bug in SSIS, but I can't find anyone else who's complained of the same thing.

Any ideas? TIA!

- Mike


Mike,

No ideas, I'm afraid. I can only say that the use of script components/tasks can be ...errrr... flaky.

I usually try to recreate the script task and hope the problem goes away.

I know that doesn't help much but I wanted you to know that I feel your pain.

-Jamie

|||

Thanks for the confirmation that the Script Task may need re-creating. I was hoping it was something you'd run into and had a nice little fix for. What a strange one! Flaky, to be sure!

If I detect any kind of pattern, such as actions that precede this problem, I'll be sure to let you know. Surely I'm doing something wrong; if this were a bug in SSIS or the VSA editor, other people would be complaining about it.

Thanks!

- Mike

|||I would sincerely appreciate it if someone from MSFT could assist with this bug. I am having this issue as well, and it is very frustrating to say the least. You spend a significant amount of time building a package, testing each step, then BAM! It's broken in the way described above, and the only thing I can do is start over, copy and paste? This is bad... I have to create a new project, create all new variables (making sure I don't accidentally scope one in the wrong place), and so forth. Any more help or ideas on how to avoid this would be appreciated.|||Can you not just delete and re-create the script task, or is it at a package or project level that you lose debug support?|||

Open up a Connect ticket.

https://connect.microsoft.com/SQLServer/Feedback|||

I am getting this error as well. I'm finding that I get the error message conditionally, depending on what line in my code I set the breakpoint on. If I set it on my "Try" line, then I DO NOT get the error. Once the debug window comes up, I can then set breakpoints where I really want to stop the code.

It's not ideal, but it is working for me.

If anyone has since found a resolution, I would greatly appreciate a response.

Thanks.

Mitch

|||I am having the same problem. Is this still not fixed over a year later?! Has anyone found out what the issue is? I just downloaded and installed SP1 for Team Suite (I have VS2005 Team Suite installed), and I still can't debug my SSIS packages. If I put any breakpoints in the script code, when I try to debug I get the message:

"SQL Server Integration Services Script Task has encountered a problem and needs to close. We are sorry for the inconvenience."

I click Close, and it then comes up with the "Microsoft Visual Studio has lost its link to..." error; it runs the package but doesn't stop at any breakpoint. If I take out all breakpoints, it runs without this error. A couple of times I have managed to put a single breakpoint early in the package, within a different script component, which causes it to break... but at a completely different point within a completely different script component. It's so weird. Any help would be appreciated.

Hoots.
|||

I am running into this problem as well. Is Microsoft listening? It has been over 1 year since it was first posted.

I think this thread shows that enough people have been experiencing this issue. Isn't it time to pay attention to your paying customers?

By the way, dropping and re-creating the script task doesn't help. I guess the only workaround is to completely re-create the package, which is terrible and error-prone.

Thank you!

|||

Hi Michael,

The original issue that started this thread is a hard-to-reproduce problem in the VSA framework that our Script Task and pipeline component rely on for designing and executing scripts.

The latest issue with not hitting breakpoints is caused by a bug in our product that, in some scenarios, is unable to locate the original breakpoint. This only occurs when you have multiple scripts in the package. To work around it, try setting a breakpoint in the first script task that gets executed. When the VSA designer loads up all the script tasks in the package and hits that breakpoint, you can add the other breakpoints in the other scripts. Unfortunately, this is the only reliable workaround for now if you are unable to set breakpoints.

Also make sure you have SQL Server 2005 SP2 installed, because that service pack changed the way scripts execute.

Hope this helps,

Silviu Guea [MSFT]
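
For anyone still stuck when breakpoints won't stick at all, a fallback (not mentioned above, so treat it as a suggestion rather than an official fix) is to trace from inside the script instead of breaking into the debugger: raise information events, or pop a MsgBox to pause execution when running in BIDS. The task name "MyScriptTask" and the "RowCount" variable below are placeholders — substitute your own, and remember a variable must be listed in the task's ReadOnlyVariables/ReadWriteVariables to be visible.

```vb
' Sketch of a Script Task Main() that traces without breakpoints.
Public Sub Main()

    ' FireInformation writes to the Progress/Output window at runtime.
    Dim fireAgain As Boolean = True
    Dts.Events.FireInformation(0, "MyScriptTask", _
        "Reached checkpoint A; RowCount=" & _
        CStr(Dts.Variables("RowCount").Value), _
        String.Empty, 0, fireAgain)

    ' Alternatively, a MsgBox halts execution interactively in BIDS
    ' (remove before deploying to a server):
    ' MsgBox("Reached checkpoint A")

    Dts.TaskResult = Dts.Results.Success
End Sub
```

It's cruder than real breakpoints, but it isn't affected by the lost-link bug because it never involves the VSA debugger at all.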
