

July SCSUG Meeting

08 Jul

First, a reminder that the July SCSUG meeting has been moved to this coming Monday, July 12th, to accommodate the schedule of our speaker, David Pless.  As usual, the meeting will take place at the SCCU HQ on Wickham Road.  This will be David's second time speaking to the group.  The first time, he covered indexes, statistics, and performance tuning methods, but had so much material that he only made it about halfway through his presentation in the time we had.  He will be finishing that presentation this time.  Having seen the first half, I can tell you that you don't want to miss it.

Second, I wanted to thank all the members of the SCSUG who came out to an impromptu, last-minute meeting set up by SCSUG President Bonnie Allard to send me off.  It was really nice to see everyone one last time, and I will miss you all.  For those that don't know (which is pretty much everyone, since I haven't said anything about it), I've been offered a position in the Raleigh, NC area and have just moved there.  As you might expect, it is with mixed feelings that I embark on this new journey: hope and excitement at the new friends and challenges I will meet in North Carolina, but also sadness at the good friends and family I leave behind.

That impromptu party was just one more reminder of the doors that open when you join your local user group.  I encourage everyone to see if there is a local chapter of PASS in your area, and if not, you could always start one.  While this will probably be my last posting for SCSUG for a while, I still have all of the connections I made through that group and would love to return in the future, whenever I'm in the area, to speak or attend.

 
No Comments

Posted in All

 

SQL Spatial data for State/County shapes

18 Jun

I saw a blog post by Grant Fritchey (Blog | Twitter) a couple of days ago on using geospatial data in SSRS 2008 R2.  I think this is incredibly cool, and it reminded me of something I wanted to share about my own recent experiences with geospatial data.  My company recently decided (or rather, decided again, for about the 50th time) that it wanted to do geospatial reporting on all its customers/sites etc.  We have a good number of them geocoded in the system with lat/long points, but we didn't have any data for things like state/county boundaries.  I didn't think there was any way that kind of information would be available online… but I was wrong.  The US Census Bureau offers up this information in files that you can download and load onto your SQL Server.

Getting it into your server is a piece of cake using some of the tools that others have put together.  I believe I used this one.

It took about 10 minutes from finding the Census Bureau data to having it downloaded and imported into my local database.  Using simple SELECT queries, you can get maps of all the states, or even the counties within a state.  That said, I believe the counties or zip codes I downloaded were a bit outdated, so they're not perfect; but they'll get you a long way down the path.
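
Just to give a sense of how little code that takes once the data is loaded, here's a minimal sketch; the table and column names (dbo.Counties, CountyName, Shape, StateName) are placeholders for whatever your import tool actually creates.

-- Hypothetical table and column names; adjust to whatever the import tool created.
-- In SSMS, the "Spatial results" tab will render the shapes as a map.
SELECT  CountyName,
        Shape              -- geography column holding the county outline
FROM    dbo.Counties
WHERE   StateName = 'Florida'
ORDER BY CountyName;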

After importing the data, here’s what Florida looks like:

[Image: map of Florida counties rendered from the imported spatial data]

 
2 Comments

Posted in All

 

June SCSUG Meeting Follow-up

18 Jun

This is a bit later than usual, as the meeting was a week ago now, but I wanted to post it anyway to thank SCSUG member Don King for stepping in on short notice to fill a hole in our speaker schedule.  Even though he didn't have a lot of notice, Don was able to put together an excellent presentation on EFD devices and how they can benefit your company.  Attendance was down a bit this month, but I believe that was mainly a factor of TechEd going on at the same time.

Don talked about how EFDs (Enterprise Flash Drives) differ from the common SSDs (Solid State Drives) you would buy for your home computer, and cited real-life examples of the performance gains he had seen in a large-scale deployment.  He also debunked several of the myths surrounding the use of solid state devices in a production server.  Following his presentation, there were a few questions on file deduplication; even though it was not part of his planned presentation, he switched over to the whiteboard to explain how that process works and answered all the questions pertaining to it without missing a beat.

Thanks again to Don for taking the time to put together a presentation on such short notice and sharing his knowledge with us.

 
No Comments

Posted in All

 

2010 Goals – Q2 Review

10 Jun

[Image: 2010 goal progress tracker]

Another quarter gone, and unfortunately I'm still looking about the same on goal progress.  Still looking good on blog posts and networking, still a bit behind on forum posts, and still abysmally behind on the learning goals.

On the bright side, the learning goals are something that I can knock out in the last month of the year if it really comes down to it. 

 
No Comments

Posted in All

 

T-SQL Tuesday #7: T-SQL Enhancements in SQL 2008.

08 Jun

 

[Image: T-SQL Tuesday logo]

In this edition of T-SQL Tuesday, Jorge Segarra (Blog | Twitter) asks us what our favorite new feature of SQL 2008 or 2008 R2 is.  I've decided to focus on the T-SQL and query-writing enhancements of 2008.  Before I do, let me preface this by noting that in no way do I believe these changes are the biggest improvements or best new things in SQL 2008, but things like data compression are bound to be covered by several others.  Also, while writing this post I noticed that most of this has already been covered better than I could hope to by Itzik Ben-Gan in his white paper (Link).  Please refer to that for more information on the new features of 2008.

 

Your New home for One stop Variable Declaration

In the past, you always had to declare variables on one line and then assign them on the next, like so:

DECLARE @MyInt int
SET @MyInt = 44

 

Now, you can do this in one statement. 

DECLARE @MyInt int = 44

 

That might seem small, but it’s significant when you’re dealing with large numbers of variables.
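
For example, a handful of related variables can now be declared and initialized in a single statement (the names here are made up):

DECLARE @StartDate datetime    = '20100101',
        @EndDate   datetime    = GETDATE(),
        @BatchSize int         = 500,
        @Label     varchar(20) = 'Q2 Review'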

SQL++ … almost

While you still can’t do something like:

 

SET @MyInt++

You can do the slightly longer version of:

 

SET @MyInt += @MyInt

-- OR

SET @MyInt += 5

Instead of:

 

SET @MyInt = @MyInt + @MyInt

-- OR

SET @MyInt = @MyInt + 5

This applies to:

  • +=  (plus equals)
  • -=  (minus equals)
  • *=  (multiplication equals)
  • /=  (division equals)
  • %=  (modulo equals)

 

This one isn’t too huge a deal in my opinion, but it’s a nice shortcut for those used to using it in other programming languages.
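
Here's a contrived little sketch just to show the syntax in action (a running total in a loop):

DECLARE @i int = 1, @Total int = 0
WHILE @i <= 10
BEGIN
    SET @Total += @i   -- same as SET @Total = @Total + @i
    SET @i += 1
END
SELECT @Total AS SumOneToTen   -- 55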

Values of all rows, Union of None

In 2008, you can insert multiple rows of data using a single VALUES clause.  This one is really handy when I'm doing code examples in blog posts, on forums, in presentations, etc.  It serves as an excellent replacement for the UNION ALL or repeated INSERT/VALUES pairs you used to have to use when supplying sample data.

 

Say you have a simple table:

CREATE TABLE #Cake(
    SomeInt     int,
    SomeChar    char(5)
)

 

You want to provide some sample data for that table.  The most common ways prior to now were either:

INSERT INTO #Cake(SomeInt,SomeChar)
VALUES(1,'AAA')

INSERT INTO #Cake(SomeInt,SomeChar)
VALUES(2,'BBB')

INSERT INTO #Cake(SomeInt,SomeChar)
VALUES(3,'CCC')

-- OR

INSERT INTO #Cake(SomeInt,SomeChar)
SELECT 1,'AAA' UNION ALL
SELECT 2,'BBB' UNION ALL
SELECT 3,'CCC'

 

Now, you can use the much cleaner:

INSERT INTO #Cake(SomeInt,SomeChar)
VALUES(1,'AAA'),
      (2,'BBB'),
      (3,'CCC')

-- Or, on one line:

INSERT INTO #Cake(SomeInt,SomeChar)
VALUES(1,'AAA'),(2,'BBB'),(3,'CCC')
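
As a side note, the same row constructor can also be used directly as a derived table, which is handy when you just need a few rows of inline data without creating a table at all; a quick sketch:

SELECT  v.SomeInt, v.SomeChar
FROM    (VALUES (1,'AAA'),
                (2,'BBB'),
                (3,'CCC')) AS v(SomeInt, SomeChar)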

Lack of Intellisense

No conversation on the topic of coding enhancements would be complete without mentioning Intellisense in some way.  So… it's there.  It works (kinda).  It could use a whole lot more fine-tuning and configuration options than it has right now, but if you don't have a copy of SQL Prompt, it's better than nothing (sometimes).  If you *do* have a copy of SQL Prompt and want a couple of the cool things SQL Intellisense has that SQL Prompt does not, you can use the hybrid approach that I've gone with.  This lets me get the () highlighting and error underlines from SQL Intellisense without overriding the much more configurable (and in my opinion less annoying) suggestions from SQL Prompt.

If you go to Tools > Options > Text Editor > Transact-SQL and turn Intellisense on, but turn Auto List Members off (under General), you can have what is (in my opinion) the best of both worlds.

[Images: the Transact-SQL Intellisense and Auto List Members settings in Tools > Options]

That about wraps up what I wanted to point out this go-round; hopefully you found something new that you didn't know about before.  Don't forget to check out all the other T-SQL Tuesday posts (click the image at the top for a link to the others), which will no doubt cover many of the much bigger improvements in SQL Server 2008 and R2.

 
1 Comment

Posted in All

 

Calculating ROI on Performance Tuning

03 Jun

Recently on the SSC forums, someone asked how to measure, in actual dollars, how much money they have saved the company with their performance tuning.  I've often struggled with this myself, so I started thinking about it a bit more deeply.  I started to respond in the thread, then decided I was going to be a bit too long-winded, so I moved it here.  The following are some of my thoughts/opinions on the subject.

I think everyone will agree that it's probably not worth the time to optimize a report that runs once a month at 3 AM on a Saturday so that it goes from 10 minutes to 2 minutes.  Most people would also agree that it's worth optimizing a query that users wait on nonstop all day long to take it from 5 seconds to 2 seconds (or even 4.5 seconds, depending on how long that change takes to accomplish; 10% is still pretty huge for something being used constantly).  For the latter, you can probably take the improvement percentage and multiply it by the salaries of all those affected and come up with a number… but is it fair to assume that that number goes straight to you?

Let's take a data entry process as an example.  Say we have employees doing data entry, and 50% of their time is spent entering things that rely on a stored procedure that takes 5 seconds to run.  You improve that SP to run in 1 second.  While this is obviously an improvement, is it fair to take 80% of 50% of their pay and attribute it to your performance savings?  I'd say not.  For one thing, there's the keying in of the information before they hit save and wait on the query.  Let's say they spend half their time typing and half of it waiting on that query to run.  We're now talking about 80% of 25%.  Then, factor in the rest of the process.  Perhaps 3 of those 5 seconds they spend 'waiting' are really spent flipping to the next page of their data or tabbing to some other application, time they would have spent anyway.  Now you're only talking about 40% of 25%.
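
To put hypothetical dollars on that, here's a back-of-the-envelope calculation; every number in it is made up, so treat it as a template rather than a result:

-- Hypothetical numbers only; plug in your own.
DECLARE @Employees       int           = 10,
        @AnnualSalary    decimal(10,2) = 30000,
        @PctOnThisScreen decimal(5,2)  = 0.50,  -- half their day is on this screen
        @PctWaiting      decimal(5,2)  = 0.50,  -- half of that is spent waiting on the SP
        @PctRecoverable  decimal(5,2)  = 0.40   -- only 2 of the 5 "waiting" seconds were truly wasted

SELECT  @Employees * @AnnualSalary
        * @PctOnThisScreen * @PctWaiting * @PctRecoverable AS EstimatedAnnualSavings
-- = 10 * 30000 * 0.5 * 0.5 * 0.4 = 30,000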

At the very least, you can *guess* at what number you saved there.  The numbers are a lot harder to get for anything that users don’t directly wait on.  A few examples of these types of queries might be:

  • Agent jobs that run constantly throughout the day
  • Pre-aggregated reports or data tables that run periodically and store data for on-demand usage
  • Processes that take only milliseconds, but perform more reads or writes than necessary and run hundreds of thousands of times a day

Is there any direct ROI on improving these things?  Maybe… maybe not.  If that process that takes a few milliseconds but runs constantly takes more locks than it truly needs, it could slow everything else down by fractional amounts or cause blocking/deadlocking issues in your database that 'freeze' users and force them to wait, wasting time and money.  However, maybe it never would have gotten to that point.  If you weren't proactive about fixing it because it wasn't causing any problems right now, it could end up costing the company tens or hundreds of thousands of dollars in lost productivity if and when it DID become a problem.  You could measure that.  But is it fair to measure it if you addressed the issue preemptively and it never happened?  How do you come up with a number in this case?  How do you decide what to count *for* yourself and what to count against yourself?

Going a completely different route, let's look at bug fixing.  Say you identify a loophole in the invoicing process that only applies to .05% of invoices because of some rare criteria.  Maybe that loophole only cost you a few hundred dollars over the 2 years it has existed, but it *could* have cost you thousands depending on which line items it happened to affect.  Do you count the hundreds of dollars it already cost you, or the thousands it would eventually have cost?

Then you have the customers to take into account on things like a public facing website.  This is also nearly impossible to measure.  If your web queries are all inefficient and slow, it might make your website feel sluggish, unresponsive or broken.  Your queries could also have bugs that return errors to the customers.  Addressing either of these things is obviously going to improve your customers’ experience while browsing your website, but how do you convert that into a physical dollar amount?  Can you claim a percentage of the revenue from that customer because you improved their web viewing experience?  I doubt the sales team would agree with whatever percentage you thought should be attributed to IT/development.  What about additional features on the website that are the deciding factor in getting new customers?  In most cases customers aren’t going to come right out and say "You know, I was really on the fence about which company to go with, but seeing that I could just check my ___ on your website really made me choose you." 

At the end of the day, if you really want to calculate a dollar value of the things you’ve done, I think you have to break things down to a very low level.  Create a range for each thing you fix, improve or create and have that range readily available.  Then when someone asks, you can present your numbers with a range for each item and they can make up their own mind at what is fair. 

Unfortunately, most companies view IT/Development as a ‘necessary evil’ that doesn’t produce anything.  These numbers probably aren’t going to change their minds, but you should at least be able to keep the focus on ‘necessary’ and off of ‘evil’.  The irony is that just about everything work related you do potentially saves or makes the company money in some manner… everything except sitting around trying to calculate how much money you’ve saved them.

 
No Comments

Posted in All

 

Create Indexes from the Missing Indexes DMV

24 May

Glenn Berry (Blog) writes a lot of queries to extract information from the system DMVs.  One of them in particular I found extremely helpful in fixing some of the issues in my system.  I took his query (the CTE at the top) and added some text manipulation to generate the CREATE statements for you, to save you some time.  I had much grander plans for this, but I've been meaning to post it for over a month now and simply haven't had time to get back to it, so rather than let it go by the wayside and never post it, I figured I'd post what I have now and possibly post an update in the future if I ever finish it.

A couple of the known problems right now are: 

  • Index names could already be taken, there’s nothing here that checks to make sure they are unique based on other indexes in your database.
  • No compression options are taken into account.

That said, I still found this fairly useful, and hopefully somebody else will as well.  Thanks again to Glenn for all his excellent work creating queries to pull information from the DMVs.

;WITH I AS (
-- Missing indexes for the current database by index advantage
-- This DMV query written by Glenn Berry
SELECT user_seeks * avg_total_user_cost *
          (avg_user_impact * 0.01) AS [index_advantage],
       migs.last_user_seek,
       mid.[statement] AS [Database.Schema.Table],
       mid.equality_columns, mid.inequality_columns,
       mid.included_columns, migs.unique_compiles, migs.user_seeks,
       migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
        ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
        ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
  AND user_seeks * avg_total_user_cost *
          (avg_user_impact * 0.01) > 9000 -- Set this to whatever threshold you like
)

SELECT 'CREATE INDEX IX_'
            + SUBSTRING([Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table])+4)+3,
                              LEN([Database.Schema.Table]) -
                              (CHARINDEX('].[',[Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table])+4)+3))
            + '_' + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(
              ISNULL(equality_columns,inequality_columns),
              '[',''),']',''),' ',''),',',''),20)
            + ' ON '
            + [Database.Schema.Table]
            + '('
            + ISNULL(equality_columns,'')
            + CASE WHEN equality_columns IS NOT NULL AND
                        inequality_columns IS NOT NULL
                   THEN ','
                   ELSE ''
              END
            + ISNULL(inequality_columns,'')
            + ')'
            + CASE WHEN included_columns IS NOT NULL
                   THEN ' INCLUDE(' + included_columns + ')'
                   ELSE ''
              END AS CreateStatement,
            'IX_'
            + SUBSTRING([Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table])+4)+3,
                              LEN([Database.Schema.Table]) -
                              (CHARINDEX('].[',[Database.Schema.Table],
                              CHARINDEX('].[',[Database.Schema.Table])+4)+3))
            + '_' + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(
              ISNULL(equality_columns,inequality_columns),
              '[',''),']',''),' ',''),',',''),20)
              AS IndexName
FROM I

 
1 Comment

Posted in All

 

Upcoming May SCSUG Meeting

10 May

This coming Thursday is the SCSUG meeting, and it's no normal meeting.  Instead of the standard format of pizza/socialization followed by a single speaker covering a topic for 1.5-2 hours, we are going to have not 1… not 2… but 6 speakers.  Each speaker will cover a topic for 15 minutes, and then we'll switch it up.  The idea is to let people give presenting a shot in as low-pressure an environment as possible.  It also lets them try out presenting to see how they like it without having to devote the time to putting together a massive presentation.  Much like blogging, I believe presenting is probably something you need to do a couple of times before you really “get” it.

The presenters will be

Christian Loris (Blog) :  Fuzzy Lookups in SSIS

Frank Cowan :  How to Extend the SSIS User Interface

Bonnie Allard (Blog | Twitter) :  Dealing with Deadlocks in SQL Server

Bob Blaydes :  How to give an effective Technical Presentation

Sandeep Vatsavai :  TBD

Seth Phelabaum (Blog | Twitter) :  An introduction to Tally Tables

Of the 6 speakers, 4 have never given a technical presentation before (Frank, Bonnie, Sandeep, and myself).  Bob is still fairly new (to my knowledge he has only given one other 15-minute presentation), and only Chris is a seasoned speaker.

Originally, I had planned to make a game out of the whole thing, with scorecards where the best speaker of the evening would be declared the ‘winner’ and get a free beer afterwards or something… but unfortunately I just haven't had the time to put that together.  Post-2K8 upgrade work, along with upgrading my main workstation from 32-bit Windows XP to 64-bit Windows 7 (so, a complete reinstall), has kept me too busy to set this up.  It's still possible that I'll get something together by Thursday night, but considering I still have 2 presentations to put together before then, it's pretty doubtful.

We'd love for anyone interested to come out and join us for a chance to see 6 different speakers get their feet wet.  The meeting is in the normal location at SCCU Headquarters.

You can RSVP here.  As always, RSVPs are appreciated but not required (we just want to make sure we order enough food for the meeting).

 
No Comments

Posted in All

 

2008 Upgrade Completed

26 Apr

Whew!  I've been super busy the past week and a half upgrading our production server from SQL 2000 to SQL 2008 and haven't had time for any new posts.  Things are finally starting to calm down again, though, and the server is running smoother than ever.  I had hoped to do a comparison of 2000 and 2008 disk activity pre- and post-compression, but due to the number of “emergencies” in the week prior to the changeover and the number of times our plans changed while reconfiguring RAID arrays etc., I never had an opportunity to get good solid numbers.  I can report a very rough estimate, though.

The compression on my objects is distributed as:

Type    Count
None    2042
Row     107
Page    549
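
If you want to see the same kind of breakdown on your own system, a query along these lines groups everything by compression setting (it counts partitions rather than whole objects, so the numbers won't line up exactly with my table above):

-- Rough breakdown of partitions by compression setting (SQL 2008+)
SELECT  p.data_compression_desc AS CompressionType,
        COUNT(*)                AS Partitions
FROM    sys.partitions p
GROUP BY p.data_compression_desc
ORDER BY Partitions DESC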
  • Our database went from 75 GB to about 30 GB after a good amount of page compression.
  • Our CPU usage hasn't gone up that I've noticed.  I haven't seen it break 30% total utilization yet; if it did go up, it didn't go up enough to become a bottleneck.
  • Many queries are running significantly faster, and I haven't really seen a decrease in any others.
  • Our disk usage has gone down significantly.
  • Our page life expectancy has gone through the roof.  Before, we averaged around 6,000 and would occasionally drop to under 2,000 or even 0; now we're averaging upwards of 150,000.
  • Average disk queue length post-upgrade is around 0.1 for both the data and tempdb arrays.

[Images: processor usage and page life expectancy charts after the upgrade]
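
For anyone who wants to watch that counter themselves, page life expectancy is exposed through sys.dm_os_performance_counters; a quick sketch:

-- Current page life expectancy (seconds) from the buffer manager counters
SELECT  cntr_value AS PageLifeExpectancy
FROM    sys.dm_os_performance_counters
WHERE   counter_name = 'Page life expectancy'
  AND   object_name LIKE '%Buffer Manager%'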

I will admit that I built a few additional indexes on this database with the help of some of the new DMVs in 2008 (more on this in my next post), but that didn't account for the bulk of this change.

Overall the upgrade went fairly smoothly.  Here are a few notes about things I had to fix during the upgrade process.

Check your identity values after a backup and restore from 2k to 2k8.  I used this little script to do it.  You'll get a lot of errors for tables that don't have identity columns, but you can just ignore them.

exec sp_msforeachtable 'DBCC CHECKIDENT("?")'

Beware temp table creation with nullable columns.

Somewhere between SQL 2000 and SQL 2008, the nullability properties of temp table columns created via SELECT INTO changed.  If you do something like:

SELECT *
INTO #1
FROM YourTable
WHERE 1=2

You won't necessarily get the same temp table in SQL 2000 vs. SQL 2008.  In SQL 2000, more of those columns might have been nullable; in SQL 2008, it properly pulls the nullability from the base table.  One of the ways you used to be able to force this in 2000 was to wrap a COALESCE(yourcolumn,NULL) around the column you need to be nullable in the creation.  That has been hit or miss in 2008; I wouldn't rely on it and would really advise just building the table properly instead of using SELECT INTO to create it.
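
If you want to verify what nullability you actually ended up with, a quick check against tempdb's catalog views will tell you (this assumes the temp table is still in scope in your session):

-- Check what nullability SQL Server actually assigned to the temp table's columns
SELECT  c.name, c.is_nullable
FROM    tempdb.sys.columns c
WHERE   c.object_id = OBJECT_ID('tempdb..#1')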

Very similar to this, I've seen things fail in 2008 that worked fine in 2000 when hard-coded placeholder values that you intend to fill in later are used.  For example:

SELECT  Field1,
        0 IntToBeCalculatedLater,
        Field3
INTO #1
FROM MyTable

In my testing I can't duplicate a case that works in 2000 and not in 2008, but I've had stored procedures start failing in my database because of this, so apparently it can happen.  Note that you can use something like CAST(0 as int) or CAST(NULL as int) instead of just putting 0 there, and it will make the column nullable.
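
Here's a minimal sketch of that workaround using the same made-up column names:

-- Sketch of the CAST workaround; table/column names are invented
SELECT  Field1,
        CAST(NULL AS int) AS IntToBeCalculatedLater,
        Field3
INTO    #1
FROM    MyTable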

 
No Comments

Posted in All

 

T-SQL Tuesday #005: Monitoring Reports with SSRS

13 Apr

This post is a T-SQL Tuesday entry, hosted this week by Aaron Nelson on the topic of “Reporting”.  (It got a little long.  Ordinarily I'd have broken this up into a series and fleshed out the individual pieces a bit better, but this touches on most of the general points.)

I like babysitter reports.  What is a "babysitter" report?  It's a report that you schedule to run on a recurring basis to check things for you.  I call them babysitter reports because they monitor things so you don't have to worry about them.  Every environment has different things that need watching.  Maybe a certain value found its way into a certain table and you need to take action because of it.  Maybe a certain query is on a rampage again and you need to kill it.  There are all kinds of things that you know you should keep an eye on but don't always remember to.  Instead of putting that burden on your memory or calendar, these automated reports do the work for you.

Here I will show you how to create one simple babysitter report.  I intentionally chose one of the more complicated ones (a CPU threshold alert) to show how far the idea can be extended, but more basic checks would not require this level of customization.  Here are a few examples of things you could create babysitter reports for:

  • Long-running queries
  • Blocking SPs
  • Index fragmentation
  • Log size
  • System info
  • Specific entries into tables

The sky is the limit.  The same strategies can be used to get information to your users when rare events occur that require immediate action, if your system doesn't already provide a means to get it to them in a timely manner.  There are certain reports in my environment that can run for *days* if the wrong parameters are sent to them… and while ideally these would be fixed in other ways, it's good to identify those situations when they occur in the interim and take action until that can be accomplished.

Here are a few sample queries for finding queries with abnormally high CPU usage.  There are two basic parts.  The first is the data driven subscription query.  You want this to be as streamlined as possible: it's the piece that runs repeatedly to see if a problem exists, and because it could run hundreds of times before its criteria is met once, you want it to be as efficient as possible.

/*
=============================================================================================
CREATE DATE:    04/12/2010
LAST MODIFIED:  04/12/2010
CREATED BY:     SETH PHELABAUM
PURPOSE:        Data driven subscription that monitors for queries using high CPU.
ISSUES:         Will notify you repeatedly.
NOTES:          This can be expanded upon quite a bit.  For instance, you could also:
                - Set up a logging table / set of tables to control how often this notifies
                  you (to stop you from getting multiple emails overnight)
                - Set up a list of excluded SPs
                - Set up a list of different actions depending on the time of day
                  (you could also change the schedule in Reporting Services)
                - Much more...
=============================================================================================
*/

CREATE PROCEDURE DDS_HighCPU
AS

SELECT DISTINCT spid, 'youremailaddress@yourdomain.com' AS Notify
FROM sys.sysprocesses
WHERE [dbid] > 4                  -- May need to filter out additional databases here for your setup
  AND cpu > 10000                 -- Adjust to whatever you consider worth knowing about
  AND cmd <> 'AWAITING COMMAND'   -- Don't want to be notified about these
  AND spid IN (SELECT spid
               FROM sys.dm_exec_connections DMV_EC
               WHERE DMV_EC.last_read  > DATEADD(mm,-2,GETDATE())
                  OR DMV_EC.last_write > DATEADD(mm,-2,GETDATE()))  -- Another filter to hopefully stop some excess emails


The second part is the actual report query.  This can be a bit more in-depth and contain all kinds of information that helps you take action on the event that transpired.

/*
=============================================================================================
CREATE DATE:    04/12/2010
LAST MODIFIED:  04/12/2010
CREATED BY:     SETH PHELABAUM
PURPOSE:        Pulls information about queries that use a large amount of CPU.
ISSUES:         Will notify you repeatedly.
NOTES:          This can be expanded upon quite a bit.  For instance, you could also include:
                - Trace events
                - Blocked processes
                - System stats (current CPU usage / IO etc.)
                - Much more...
=============================================================================================
*/
CREATE PROCEDURE Rpt_HighCPU(
    @SPID int
)
AS

DECLARE @sql_handle varbinary(64),
        @stmt_start int,
        @stmt_end   int,
        @FNGS       nvarchar(max),
        @DBIB       nvarchar(4000)

-- Grab the handle and statement offsets for the offending spid
SELECT TOP 1 @sql_handle = sql_handle, @stmt_start = stmt_start, @stmt_end = stmt_end
FROM sys.sysprocesses (nolock)
WHERE spid = @SPID
ORDER BY sql_handle DESC -- Or stmt_start DESC

-- Pull the statement text for that handle
SELECT @FNGS = CASE WHEN @stmt_start > 0
                    THEN SUBSTRING(text, (@stmt_start + 2)/2,
                              CASE @stmt_end
                                WHEN -1 THEN (DATALENGTH(text))
                                ELSE (@stmt_end - @stmt_start + 2)/2
                              END)
                    ELSE [text]
               END
FROM ::fn_get_sql(@sql_handle)

-- Capture the input buffer as well
CREATE TABLE #B(eventtype nvarchar(30), parameters int, eventinfo nvarchar(4000))

INSERT INTO #B(EventType, Parameters, EventInfo)
EXEC ('dbcc inputbuffer (' + CAST(@SPID AS varchar(10)) + ') with no_infomsgs')

SELECT @DBIB = EventInfo FROM #B

SELECT TOP 1
       @FNGS FNGS,
       @DBIB DBIB,
       cpu,
       physical_io,
       memusage,
       status,
       nt_username,
       last_batch
FROM sys.sysprocesses
WHERE spid = @SPID
ORDER BY sql_handle DESC


We also need something that will put a strain on the server to demonstrate the report in action, so I created this ridiculous little SP to run for a while.

CREATE PROCEDURE dbo.SillyLongRun
AS
exec dbo.SillyLongRun2
GO

/*
=============================================================================================
CREATE DATE:    04/12/2010
LAST MODIFIED:  04/12/2010
CREATED BY:     SETH PHELABAUM
PURPOSE:        To run for a while.
ISSUES:         Totally pointless.
NOTES:          Header here mainly to demonstrate the usage of stmt_start and stmt_end
                with fn_get_sql.  This thing sucks up resources, so don't run it on a
                production box.

exec dbo.SillyLongRun2
=============================================================================================
*/
CREATE PROCEDURE dbo.SillyLongRun2
AS

;WITH
-- Tally table gen                            Tally rows:   x2            ,  x3
t1 AS (SELECT 1 N UNION ALL SELECT 1 N),                 -- 4             ,  8
t2 AS (SELECT 1 N FROM t1 x, t1 y),                      -- 16            ,  64
t3 AS (SELECT 1 N FROM t2 x, t2 y),                      -- 256           ,  4096
t4 AS (SELECT 1 N FROM t3 x, t3 y),                      -- 65536         ,  16,777,216
t5 AS (SELECT 1 N FROM t4 x, t4 y),                      -- 4,294,967,296 ,  A lot
Tally AS (SELECT CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) as bigint) N
          FROM t5 x, t5 y) -- Swap in one of the smaller CTEs above for fewer rows

SELECT N, CAST(N as varchar(30)), DATEADD(ms,n,0)
FROM Tally

(You can call the above with exec dbo.SillyLongRun)

As mentioned in the headers, you would ideally keep a log of when you were notified about things.  Different alerts could be scheduled with different frequencies: perhaps you only want to be notified about certain things once a week, but about other things once an hour until they are taken care of.  This is where a logging table comes in.  I won't go into that here, but wanted to mention it.
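
Purely as a sketch (none of this appears in the report above, and the names are invented), such a logging table might look something like this:

-- Hypothetical notification log; table and column names invented for illustration
CREATE TABLE dbo.AlertLog(
    AlertLogID  int IDENTITY(1,1) PRIMARY KEY,
    AlertName   varchar(100) NOT NULL,           -- e.g. 'HighCPU'
    NotifiedAt  datetime     NOT NULL DEFAULT GETDATE()
)

-- The DDS query could then skip alerts already sent within the last hour:
-- WHERE NOT EXISTS (SELECT 1 FROM dbo.AlertLog
--                   WHERE AlertName = 'HighCPU'
--                     AND NotifiedAt > DATEADD(hh,-1,GETDATE()))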

Now that we have the queries, we need to set up the report.  I'm going to assume that you already have Reporting Services set up.  Here are a couple of screenshots of the very basic report I created to pull in the data.

[Images: the report dataset setup and the report in design mode]

Once you deploy this report, there are a couple more things to do before you can create a data driven subscription for it.  The first is to set up a shared schedule.  Log into your report server (http://localhost/reports), go to Site Settings at the top, then click Schedules on the left and New Schedule.  For this one I'm just going to create a basic 15-minute recurring schedule.

[Image: the shared schedule settings]

Next, we need to modify the credentials of the shared data source used for the report.  My data source name for the report is SS2K8CL100.  To modify it, I go back to Home –> Data Sources –> SS2K8CL100.  The screenshot below shows me changing it to use a Windows account.

[Image: the data source security settings]

Now we're ready to create the data driven subscription.  Rather than explain it in text, I've taken screenshots of each step of creating the subscription.

[Images: the seven steps of the data driven subscription wizard]

Click Finish and you have your report.

[Image: the finished report as delivered]

In closing, I'll note that I had a lot of problems getting Reporting Services to function correctly on my Windows 7 installation, so this isn't as polished as I would have liked.  I didn't get the email delivery working, and I forgot to include the SPID anywhere on the report (a pretty useful piece of information to have).

 
2 Comments

Posted in All