

#Sql server timeslice driver#

Suppose you execute a command and the command times out. When this happens the SqlClient driver sends a special 8 byte packet to the server called an attention packet, which tells the server to stop executing the current command. You can also send this packet by calling SqlCommand.Cancel on an asynchronous SqlCommand object. After sending the attention packet we have to wait for the attention acknowledgement from the server, and this can in theory take a long time and time out; this is a special case where we use a 5 second timeout. In most cases you will never hit it, because the server is usually very responsive to attention packets - they are handled very low in the network layer.

Update: According to this list, SQL timeouts happen when waiting for the attention acknowledgement from the server.

Run a performance profile on the queries; perhaps some well-placed indexes could speed them up, or rewriting the query could too. Queries should not take that long in any case - MSSQL can handle large databases, and I've worked with GBs of data on it before. I'm not quite sure that queries keep on running after the client connection has closed.

On the server level, use SQL Server Management Studio to view the server properties; on the Connections page you can specify the default query timeout. You can also specify the connection timeout within the SQL connection string when you connect to the database, like so: "Data Source=localhost;Initial Catalog=database;Connect Timeout=15"

So it entirely depends, but I'd start with "what can I compromise? what can I change?" and go from there. Sometimes it takes extra coding to optimize a query, sometimes it takes extra money to get you a secondary read-only DB, and sometimes it needs time and attention spent on index tuning. It entirely depends on what you are doing, but there is always a solution.

This has to be solved on the SQL Server side, by way of a read-only replica, transaction log shipping (to give you a read-only server to connect to), replication and such. Basically you give the DMZ a SQL server that heavy reads can go to without killing anything. A well-designed SQL system won't be taken down by DDoS - that'd be like a car that dies if you step on the gas.

That said, if you are at liberty to change the code, you could guesstimate whether a query is too heavy and either reject it or return only X rows from your stored procedure. If you are married to some reporting tool and can't control the SELECT it generates, you could point it at a view and put the safety valve in the view. Also, if up-to-the-minute freshness isn't critical and you can compromise on that - monthly sales data, say - then having a job compile a physical table from the complex joins might do the trick; that way every query would be sub-second.

If we have a TotalSeconds value, we can get:

- Hours: TotalSeconds / 3600
- Remaining Minutes: (TotalSeconds % 3600) / 60
- Remaining Seconds: TotalSeconds % 60

(% is the modulo operator in T-SQL, which returns the remainder when dividing two integers.) Thus, we can write our SQL to return 3 integer columns (Hours, Minutes, Seconds).
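The Hours/Minutes/Seconds breakdown described above can be written in T-SQL like this - a sketch, where the table name `dbo.Timings` and the `TotalSeconds` column are hypothetical stand-ins (the original article's exact query is not preserved here):

```sql
-- Assumes TotalSeconds is an INT, so / performs integer division
-- and % returns the remainder, as described above.
SELECT
    TotalSeconds / 3600        AS Hours,
    (TotalSeconds % 3600) / 60 AS Minutes,
    TotalSeconds % 60          AS Seconds
FROM dbo.Timings;
```

For example, a TotalSeconds of 3725 breaks down as Hours = 1, Minutes = 2, Seconds = 5, since 3725 % 3600 = 125 and 125 / 60 = 2 in integer division.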

It sounds like more of an architectural issue, and any timeout/disconnect you can do would be more or less a band-aid.
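As a footnote to the server-level setting mentioned earlier: the default query timeout shown on the Connections page of the server properties corresponds to the `remote query timeout` configuration option, so it can also be changed from T-SQL. A sketch, assuming you have permission to run `sp_configure` (the value is in seconds, and 0 means no timeout):

```sql
-- Set the server-wide remote query timeout to 15 seconds
EXEC sp_configure 'remote query timeout', 15;
RECONFIGURE;
```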
