Category: SQL Server

Getting rid of: "WCF-Custom" raised an error message. Details "System.Data.SqlClient.SqlException: Timeout expired"

December 22, 2010 / Comments Off on Getting rid of: "WCF-Custom" raised an error message. Details "System.Data.SqlClient.SqlException: Timeout expired"

I had a nice setup in my BizTalk environment. I had 4 receive locations polling for data (in the same table), and it all boiled down to executing a stored procedure with different parameters. I used the WCF adapter with SQL bindings for that.
For some obscure reason, I would get timeouts in the event log. Below is a screenshot of the receive port setup I had. Only one is shown, but I had four of them.

Some important things to notice:

  • PolledDataAvailableStatement is not used because PollWhileDataFound = false, but you still have to put something in there to make things work.
  • PollingStatement executes a specific sproc, with the number of records you want to receive as a parameter.
  • UseAmbientTransaction: no, no, no. I had too many problems in the past when using MSDTC and receive locations (locks and such), so this is a NO.
  • ReceiveTimeout is set to 10 minutes. This is a bit long, but I never touched that value; it’s the default.
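Roughly, the binding configuration looked like this. This is a sketch to make the settings above concrete; the binding name, statement texts, and values are illustrative, not copied from my actual setup:

```xml
<!-- Sketch of the relevant WCF-SQL (sqlBinding) settings; names and
     statement texts here are illustrative placeholders. -->
<bindings>
  <sqlBinding>
    <binding name="PollingBinding"
             pollingStatement="EXEC [dbo].[Get_TeVertalen_Berichten] @MaximumAantal = 15"
             polledDataAvailableStatement="SELECT 1"
             pollWhileDataFound="false"
             useAmbientTransaction="false"
             receiveTimeout="00:10:00" />
  </sqlBinding>
</bindings>
```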

To make things a little bit clearer, the stored procedure I used looked like this:

CREATE PROCEDURE [dbo].[Get_TeVertalen_Berichten]
(
    @MaximumAantal INT = NULL
)
AS
BEGIN
    EXEC [dbo].[Get_Berichten_Internal]
        @MaximumAantalInternal = @MaximumAantal,
        @OudeBerichtStatus     = 'TeVertalen',
        @NieuweBerichtStatus   = 'VertalenGestart',
        @Richting              = 'Ingaand'
END

And here is the code of Get_Berichten_Internal:

CREATE PROCEDURE [dbo].[Get_Berichten_Internal]
(
    @MaximumAantalInternal INT = 15,
    @OudeBerichtStatus     VARCHAR(200),
    @NieuweBerichtStatus   VARCHAR(200),
    @Richting              VARCHAR(20)
)
AS
BEGIN
    BEGIN TRY
        -- Declare variables
        DECLARE @AFFECTED_KEYS TABLE
        (
            BerichtID NUMERIC(18, 0)
        )
        DECLARE @AFFECTED_ROWS INT
        -- Some sets needed
        SET NOCOUNT ON
        -- Perform the update statement and capture all the keys
        UPDATE TOP (@MaximumAantalInternal)
            dbo.tb_Bericht
        SET
            BerichtStatus = @NieuweBerichtStatus
        OUTPUT
            INSERTED.BerichtID
        INTO
            @AFFECTED_KEYS
        WHERE
            BerichtStatus = @OudeBerichtStatus
        AND ExternBerichtTypeID IS NOT NULL
        AND InternBerichtTypeID IS NOT NULL
        AND Richting = @Richting
        -- DO STUFF WITH THE AFFECTED KEYS
        -- <SNIP>
    END TRY
    BEGIN CATCH
        -- A TRY block needs a matching CATCH; rethrow to the caller
        DECLARE @ErrorMessage NVARCHAR(4000)
        SELECT @ErrorMessage = ERROR_MESSAGE()
        RAISERROR (@ErrorMessage, 16, 1)
    END CATCH
END

Now everything looked good, and in the query analyzer everything was working blazingly fast. But I started to see the following message in the event log every 10 minutes (hmm, that’s the receive timeout). And sometimes several (four, to be exact) of them within a very short period of time.

Event Type: Warning
Event Source: BizTalk Server 2006
Event Category: BizTalk Server 2006
Event ID: 5740
Date:  14-12-2010
Time:  10:47:07
User:  N/A
Computer: BA34T
Description:
The adapter “WCF-Custom” raised an error message. Details “System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
   at Microsoft.ServiceModel.Channels.Common.Design.AdapterAsyncResult.End()
   at Microsoft.ServiceModel.Channels.Common.Channels.AdapterInputChannel.EndTryReceive(IAsyncResult result, Message& message)
   at System.ServiceModel.Dispatcher.InputChannelBinder.EndTryReceive(IAsyncResult result, RequestContext& requestContext)
   at System.ServiceModel.Dispatcher.ErrorHandlingReceiver.EndTryReceive(IAsyncResult result, RequestContext& requestContext)”.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

And once this message started to appear, things went downhill from there. I could see in the profiler that stored procedures were taking ages, and even simple updates of a single record would take thirty seconds or so. Lock timeouts and deadlocks were occurring on a very regular basis, even though the receive locations weren’t receiving any data. So nothing was received, and I still got my daily portion of errors/warnings/suspended stuff in BizTalk because of this.
So basically it looked like I was having some kind of locking problem, even though no work was done by the second stored procedure. But the second procedure did an update (even when there was nothing to update), and maybe that triggered the nasty behaviour.

I started a discussion on MSDN to see if anybody could help: <<See here>>

But eventually I changed something to make this behaviour go away. The idea was to only perform the update when there was stuff to update. So I changed the first procedure to:

CREATE PROCEDURE [dbo].[Get_TeVertalen_Berichten]
(
    @MaximumAantal INT = NULL
)
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    DECLARE @Aantal INT
    SELECT
        @Aantal = COUNT(*)
    FROM
        dbo.tb_Bericht WITH (NOLOCK)
    WHERE
        BerichtStatus = 'TeVertalen'
    AND ExternBerichtTypeID IS NOT NULL
    AND InternBerichtTypeID IS NOT NULL
    AND Richting = 'Ingaand'
    IF @Aantal > 0
    BEGIN
        EXEC [dbo].[Get_Berichten_Internal]
            @MaximumAantalInternal = @MaximumAantal,
            @OudeBerichtStatus     = 'TeVertalen',
            @NieuweBerichtStatus   = 'VertalenGestart',
            @Richting              = 'Ingaand'
    END
END

And by first checking whether there was any work to do, and only then doing the update, I got rid of the deadlocks and the errors in the event log.
I really don’t know why this solved it, but my guess is there is some bug in the WCF adapter that goes wrong if you start a transaction but return no data.

Everything is running smoothly now, and I hope this post will help somebody experiencing the same problems.

Interesting stuff coming with the BizTalk Adapter Pack

August 15, 2008 / Comments Off on Interesting stuff coming with the BizTalk Adapter Pack

I really can’t wait for the BizTalk Adapter Pack. Finally we get a good SQL adapter capable of almost everything. I had concurrency problems in the past because the old adapter always uses the Serializable transaction isolation level. With the new SQL adapter the isolation level is settable. Have a read of the SQL LOB adapter capabilities >> here <<

They also threw some goodies into the Adapter Pack: the various adapters will now show up as native BizTalk adapters. Have a look >> here <<

The truth about Dynamic SQL and Stored Procedures….

January 7, 2008 / Comments Off on The truth about Dynamic SQL and Stored Procedures….

Well, I am not going to say what’s better or what I prefer; I can only give my view of things.



What everybody seems to forget is that an external person with a very good understanding of SQL (commonly known as a DBA) can solve performance problems when stored procedures are used.
And I totally agree that stored procedures are overkill for most of the simple CRUD functionality. So I am not going to start a debate about that.



But just one question for all those who are against stored procedures.



What if the generated dynamic SQL performs badly? If a sproc had been used, a SQL DBA could have tuned the query.
And by tuning I don’t mean just adding some indexes, but maybe cursors / memory tables / views / CLR etc. Just to get the results in the fastest possible way.



How would you do this with dynamic SQL?

Inserts with GUIDs have become faster… (in SQL 2005, that is)

Ok,

I am not a real fan of the GUID datatype in SQL Server, because some people will create clustered indexes on GUID columns and then start to insert thousands of records. Next they will start complaining that SQL isn’t fast at all.
(Which is about the same as putting a caravan behind a Formula 1 car and then complaining it won’t do 300 MPH on the straight…)

But there is something new now in SQL 2005.

Have a read and see that insert performance has increased -> http://www.lazydba.com/sql/1__18914.html

Since he is using NEWSEQUENTIALID, insert performance will be OK even if it’s a clustered index…
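To make that concrete, here is a minimal sketch of the pattern (the table and column names are made up for illustration): NEWSEQUENTIALID() can only be used as a column default, and because each generated GUID is greater than the previous one on that machine, new rows go to the end of the clustered index instead of splitting random pages.

```sql
-- Sketch: sequential GUID key on a clustered PK (names are illustrative).
CREATE TABLE dbo.tb_Example
(
    ExampleID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_tb_Example_ExampleID DEFAULT NEWSEQUENTIALID(),
    Payload   VARCHAR(200) NOT NULL,
    CONSTRAINT PK_tb_Example PRIMARY KEY CLUSTERED (ExampleID)
)
```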

This post is related to the old discussion about –> GUID is not always good ( a true Read the &*%^&%^& Manual story)

Things you should know about UpdateGrams and the SQL Adapter !!

December 1, 2005 / Comments Off on Things you should know about UpdateGrams and the SQL Adapter !!

For lots of people this won’t be too interesting, but if you are a BizTalker and have regular interaction with a SQL Server, keep on reading.

First I wish to thank Richard Seroter, who had an excellent post about the SQL adapter (see here). (Don’t know why I haven’t seen his blog before; this guy is posting interesting stuff.) Via Richard’s blog you automatically also end up at Stephen Kaufman’s blog, because he has an excellent article about the at-identity.

Whenever I used the SQL adapter I hated that it would generate an orchestration and a schema, because whenever I used the SQL Adapter Schema Wizard I don’t want the port in a NEW orchestration but in my CURRENT orchestration. This is where Richard’s excellent article comes in (see here). Just define a project called ‘SQL Something Base’, define all your SQL-generated schemas in that project, make the ports of the orchestrations public, and presto: use it wherever you like. I really like the idea. Also, whenever a table changes you don’t have to go through the process of copying the multipart message and ports.

The second part of his post talks about the possibility of making an Update / Insert / Delete for a table with an updategram. But when I tried this it wouldn’t work as expected. So I messed around a little bit, and I think I have found the answer to a good working Update / Insert / Delete. Lots of it is copied from Richard’s original idea, but still, here we go. If you don’t know updategrams, read this first.

I have a table in SQL, and the table looks like this:

CREATE TABLE [dbo].[Customers] (
    [CustomerID]   [int] IDENTITY (1, 1) NOT NULL ,
    [CompanyName]  [nvarchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [ContactName]  [nvarchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [ContactTitle] [nvarchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Address]      [nvarchar] (60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [City]         [nvarchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Region]       [nvarchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [PostalCode]   [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Country]      [nvarchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Phone]        [nvarchar] (24) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Fax]          [nvarchar] (24) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]

So when I run the SQL Adapter Schema Wizard I get an updategram looking like this (well, not exactly; note the field updg:id).

So what about the updg:id? Well, have a read of this article about the use of id in an updategram. Then you can use the trick as described by Stephen Kaufman to create the id attribute. In short, the updg:id is used to tell SQL which before block belongs to which after block. And this is where Richard Seroter’s article was wrong: it used the key of the record for the ID, but that makes it impossible to update the same record twice in the same run. So I followed all of his steps, but my map looks a little bit different. Have a look at it.
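As a hand-written illustration (not the wizard output; the element names and values here are made up), this is the shape of an updategram that updates the same Customers row twice in one document. Note that updg:id pairs each before block with its after block, so it must be unique per pair rather than derived from the record’s key:

```xml
<!-- Sketch of an updategram (values illustrative): two updates to the
     same CustomerID, distinguished by distinct updg:id values. -->
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:before>
      <Customers updg:id="1" CustomerID="42" />
    </updg:before>
    <updg:after>
      <Customers updg:id="1" ContactName="First change" />
    </updg:after>
    <updg:before>
      <Customers updg:id="2" CustomerID="42" />
    </updg:before>
    <updg:after>
      <Customers updg:id="2" ContactName="Second change" />
    </updg:after>
  </updg:sync>
</ROOT>
```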

I use the iteration to generate a unique key for me. So now I can update the same record twice in one document. The second page looks like this:

Notice the ID field is not mapped, because it is an identity.

I know I had to steal and borrow a lot, and I didn’t invent it myself. I just wanted to tell you about my findings and hope you like it.

Why MyGeneration Beats the Competition when it comes to code generation

April 12, 2005 / Comments Off on Why MyGeneration Beats the Competition when it comes to code generation

The main reason why MyGeneration is an excellent tool (Besides support of the dOOdad architecture) is the ability to add several templates to a project. Below is an example of this.

In this project four templates are needed. Adding a template is very easy: just right-click on the root node and add a template instance. You can also execute just the selected template, so only the code within that template is generated.

Then when the template is added you can set the properties for that template.

First you select an output directory; this is where the generated code will be saved. Then you select a database and the tables you need in your project. In this case I needed all tables (that’s why they are blue). Then you save the project. This input is saved as well, so you can generate over and over again without any problems. If a table is added, just select that table as well in the parameters.

Now all that you need is to select the rootnode and hit the green play button on the project explorer. You should get a screen like the one below.

The only thing left to do is add the resulting code to your project and you are done! I always generate the code directly into my Visual Studio project directories. So you get a popup from Visual Studio that things have changed; hit the OK button and compile.

Now whenever there is a schema change in the database, just implement the change, hit the generate button and be done with those changes.

GUID is not Always GOOD !!!!!!! (a true RTFM story)

A colleague of mine asked me some questions about a clustered index. He was preparing for a SQL exam and had trouble understanding some concepts. Not because they were too difficult for him, but because they were completely opposite to what he experienced in everyday programming life!


To make a long story short.


A lot of programmers have a GUID as a Primary key.


In the Books Online that come with SQL Server there is the following statement:


PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a nonclustered index is not specified when you create the PRIMARY KEY constraint


So if there is no other index on a table, the primary key will be clustered. (So far nothing wrong.) But if you have a GUID as a PK, then let’s have a look at what Books Online says about the uniqueidentifier…


The uniqueidentifier data type has several disadvantages:



  1. The values are long and obscure. This makes them difficult for users to type correctly, and more difficult for users to remember.

  2. The values are random and cannot accept any patterns that may make them more meaningful to users.

  3. There is no way to determine the sequence in which uniqueidentifier values were generated. They are not suited for existing applications that depend on incrementing key values serially.

  4. At 16 bytes, the uniqueidentifier data type is relatively large compared to other data types such as 4-byte integers. This means indexes built using uniqueidentifier keys may be relatively slower than implementing the indexes using an int key. Consider using the IDENTITY property when global uniqueness is not necessary, or when having a serially incrementing key is desirable.

Besides that, there is also another disadvantage to having a GUID as the primary key. The data type is relatively large… (also from Books Online)


Wide keys: the key values from the clustered index are used by all nonclustered indexes as lookup keys and therefore are stored in each nonclustered index leaf entry.


So every record refers to its position in the table with the GUID. Since a GUID is 16 bytes and an ID is 4, the index with the GUID is much larger and will therefore be slower…
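A minimal sketch of the two designs side by side (table and index names are made up for illustration); any nonclustered index on either table stores a copy of the clustered key in every leaf entry, so the GUID version carries 16 bytes per entry where the INT version carries 4:

```sql
-- Sketch (illustrative names): same table, two clustered key choices.
CREATE TABLE dbo.Orders_Guid
(
    OrderID   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED, -- 16-byte key
    OrderDate DATETIME NOT NULL
)
CREATE TABLE dbo.Orders_Int
(
    OrderID   INT IDENTITY (1, 1) NOT NULL PRIMARY KEY CLUSTERED, -- 4-byte key
    OrderDate DATETIME NOT NULL
)
-- Each leaf entry of these nonclustered indexes includes the clustered
-- key, so the GUID version is roughly 4x wider per key.
CREATE NONCLUSTERED INDEX IX_Orders_Guid_OrderDate ON dbo.Orders_Guid (OrderDate)
CREATE NONCLUSTERED INDEX IX_Orders_Int_OrderDate  ON dbo.Orders_Int  (OrderDate)
```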


Hope this is useful for someone…



Patrick


