
Posts Tagged ‘Performance’

Calculating ROI on Performance Tuning

03 Jun

Recently on the SSC forums, someone asked how to put an actual dollar figure on the money they’ve saved their company through performance tuning.  I’ve often struggled with this myself, so I started thinking about it a bit deeper.  I started to respond in the thread, then decided I was going to be a bit too long-winded, so I moved it here.  The following are some of my thoughts and opinions on the subject.

I think everyone will agree that it’s probably not worth the time to optimize a report that runs once a month at 3 AM on a Saturday just to take it from 10 minutes to 2 minutes.  Most people would also agree that it’s worth optimizing a query that users wait on nonstop all day long to take it from 5 seconds to 2 seconds (or even to 4.5 seconds, depending on how much time the change would take; 10% is still pretty huge for something being used constantly).  For the latter, you can probably take the improvement percentage, multiply it by the salaries of all those affected, and come up with a number… but is it fair to assume that that number goes straight to you?

Let’s take a data entry process as an example.  We’ll say we have employees doing data entry, and 50% of their time is spent entering things that rely on a stored procedure that takes 5 seconds to run.  You improve that SP to run in 1 second.  While this is obviously an improvement, is it fair to take 80% of 50% of their pay and attribute it to your performance savings?  I’d say not.  For one thing, there’s the keying in of the information before they hit save and wait on the query.  Let’s say they spend half their time typing and half of it waiting on that query to run.  We’re now talking about 80% of 25%.  Then, factor in the rest of the process.  Perhaps that 5 seconds they spend ‘waiting’ is really spent flipping to the next page of their data or tabbing to some other application, which takes them 3 seconds to do anyway.  Now you’re only talking about 40% of 25%.
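
To put rough numbers on that, here’s a quick back-of-the-envelope calculation.  Every figure below is made up purely for the sake of the example:

DECLARE @AnnualPayroll money,             -- combined annual pay of the affected data entry staff
        @TimeOnThisProcess decimal(4,2),  -- portion of their day actually spent waiting on the SP
        @RealizedImprovement decimal(4,2) -- portion of that wait you actually eliminated

SET @AnnualPayroll = 500000.00        -- hypothetical payroll figure
SET @TimeOnThisProcess = 0.25         -- half their day on this process, half of that spent waiting
SET @RealizedImprovement = 0.40       -- only 2 of the 5 "waiting" seconds were truly wasted

SELECT @AnnualPayroll * @TimeOnThisProcess * @RealizedImprovement AS EstimatedAnnualSavings  -- 50000.00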

At the very least, you can *guess* at what number you saved there.  The numbers are a lot harder to get for anything that users don’t directly wait on.  A few examples of these types of queries might be:

  • Agent jobs that run constantly throughout the day.
  • Pre-aggregated reports or data tables that run periodically and store their data for on-demand usage.
  • Processes that take only milliseconds but perform more reads or writes than necessary and run hundreds of thousands of times a day.

Is there any direct ROI on improving these things?  Maybe… maybe not.  If that process that takes a few milliseconds but runs constantly takes more locks than it truly needs, it could slow everything else down by fractional amounts or cause blocking/deadlocking issues in your database that ‘freeze’ users and force them to wait, wasting time and money.  Then again, maybe it never would have gotten to that point.  If you weren’t proactive about fixing it because it wasn’t causing any problems right now, it could end up costing the company tens or hundreds of thousands of dollars in lost productivity if and when it DID become a problem.  You could measure that.  But is it fair to measure it if you addressed the issue preemptively and it never happened?  How do you come up with a number in this case?  How do you decide what to count *for* yourself and what to count against yourself?

Going a completely different route, let’s look at bug fixing.  Let’s say you identify a loophole in the invoicing process that only applies to 0.05% of invoices because of how rare the qualifying criteria are.  Maybe that loophole only cost you a few hundred dollars over the 2 years it has existed, but it *could* have cost you thousands depending on which line items it happened to affect.  Do you count the hundreds of dollars it already cost you, or the thousands of dollars it would have eventually cost?

Then you have the customers to take into account on things like a public facing website.  This is also nearly impossible to measure.  If your web queries are all inefficient and slow, it might make your website feel sluggish, unresponsive or broken.  Your queries could also have bugs that return errors to the customers.  Addressing either of these things is obviously going to improve your customers’ experience while browsing your website, but how do you convert that into a physical dollar amount?  Can you claim a percentage of the revenue from that customer because you improved their web viewing experience?  I doubt the sales team would agree with whatever percentage you thought should be attributed to IT/development.  What about additional features on the website that are the deciding factor in getting new customers?  In most cases customers aren’t going to come right out and say "You know, I was really on the fence about which company to go with, but seeing that I could just check my ___ on your website really made me choose you." 

At the end of the day, if you really want to calculate a dollar value for the things you’ve done, I think you have to break things down to a very low level.  Create a range for each thing you fix, improve or create, and have that range readily available.  Then when someone asks, you can present your numbers with a range for each item, and they can make up their own minds about what is fair.
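
If you want somewhere to keep those ranges, even a simple log table will do.  The one below is purely hypothetical; the columns are just whatever I’d want in front of me when someone asks:

CREATE TABLE dbo.TuningLog (
    TuningLogID  int IDENTITY(1,1) PRIMARY KEY,
    FixedOn      datetime      NOT NULL,  -- when the change went in
    Description  varchar(500)  NOT NULL,  -- what was fixed, improved or created
    LowEstimate  money         NOT NULL,  -- conservative dollar value
    HighEstimate money         NOT NULL,  -- what it could plausibly have been worth
    Notes        varchar(1000) NULL       -- how the range was arrived at
)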

Unfortunately, most companies view IT/Development as a ‘necessary evil’ that doesn’t produce anything.  These numbers probably aren’t going to change their minds, but you should at least be able to keep the focus on ‘necessary’ and off of ‘evil’.  The irony is that just about everything work related you do potentially saves or makes the company money in some manner… everything except sitting around trying to calculate how much money you’ve saved them.

 
No Comments

Posted in All

 

Create Indexes from the Missing Indexes DMV

24 May

Glenn Berry (Blog) writes a lot of queries to extract information from the system DMVs.  One of them in particular I found extremely helpful in fixing some of the issues in my system.  I took his query (the CTE at the top) and added some text manipulation to actually generate the CREATE statements for you, to save you some time.  I had much grander plans for this, but I’ve been meaning to post it for over a month now and simply haven’t had time to get back to it.  Rather than let it go by the wayside and never post it, I figured I’d post what I have now and possibly post an update in the future if I ever finish it.

A couple of the known problems right now are: 

  • Index names could already be taken; nothing here checks that they are unique against the other indexes in your database.
  • No compression options are taken into account.

That said, I still found this fairly useful and hopefully somebody else will as well.  Thanks again to Glenn for all his excellent work creating queries to pull information from the DMVs.

;WITH I AS (
-- Missing Indexes for the current database by Index Advantage
-- This DMV query written by Glenn Berry
SELECT user_seeks * avg_total_user_cost *
       (avg_user_impact * 0.01) AS [index_advantage],
       migs.last_user_seek,
       mid.[statement] AS [Database.Schema.Table],
       mid.equality_columns, mid.inequality_columns,
       mid.included_columns, migs.unique_compiles, migs.user_seeks,
       migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
        ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
        ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
  AND user_seeks * avg_total_user_cost *
      (avg_user_impact * 0.01) > 9000 -- Set this to whatever
)

SELECT 'CREATE INDEX IX_'
       -- Extract the table name from [Database].[Schema].[Table]
       + SUBSTRING([Database.Schema.Table],
                   CHARINDEX('].[', [Database.Schema.Table],
                       CHARINDEX('].[', [Database.Schema.Table]) + 4) + 3,
                   LEN([Database.Schema.Table]) -
                   (CHARINDEX('].[', [Database.Schema.Table],
                       CHARINDEX('].[', [Database.Schema.Table]) + 4) + 3))
       -- Append a suffix built from the first 20 characters of the key columns
       + '_' + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(
               ISNULL(equality_columns, inequality_columns),
               '[', ''), ']', ''), ' ', ''), ',', ''), 20)
       + ' ON '
       + [Database.Schema.Table]
       + '('
       + ISNULL(equality_columns, '')
       + CASE WHEN equality_columns IS NOT NULL AND
                   inequality_columns IS NOT NULL
              THEN ','
              ELSE ''
         END
       + ISNULL(inequality_columns, '')
       + ')'
       + CASE WHEN included_columns IS NOT NULL
              THEN ' INCLUDE(' + included_columns + ')'
              ELSE ''
         END AS CreateStatement,
       'IX_'
       + SUBSTRING([Database.Schema.Table],
                   CHARINDEX('].[', [Database.Schema.Table],
                       CHARINDEX('].[', [Database.Schema.Table]) + 4) + 3,
                   LEN([Database.Schema.Table]) -
                   (CHARINDEX('].[', [Database.Schema.Table],
                       CHARINDEX('].[', [Database.Schema.Table]) + 4) + 3))
       + '_' + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(
               ISNULL(equality_columns, inequality_columns),
               '[', ''), ']', ''), ' ', ''), ',', ''), 20)
         AS IndexName
FROM I

 
1 Comment

Posted in All

 

2008 Upgrade Completed

26 Apr

Whew!  So I’ve been super busy the past week and a half upgrading our production server from SQL 2000 to SQL 2008 and haven’t had time for any new posts.  Things are finally starting to calm down again, though, and the server is running smoother than ever before.  I had hoped to do a comparison of 2000 and 2008 disk activity pre- and post-compression, but due to the number of “emergencies” in the week prior to the changeover, and the number of times our plans changed while reconfiguring RAID arrays, etc., I never had an opportunity to get good solid numbers.  I can report a very rough estimate, though.

The compression on my objects is distributed as:

Type   # of objects
None   2042
Row    107
Page   549

  • Our database went from 75 GB to about 30 GB after a good amount of page compression.
  • Our CPU usage hasn’t gone up that I’ve noticed; I haven’t seen it break 30% total utilization yet.  If it did go up, it didn’t go up enough to become a bottleneck.
  • Many of the queries are running significantly faster, and I haven’t really seen a decrease on any others.
  • Our disk usage has gone down significantly.
  • Our page life expectancy has gone through the roof.  Before, we averaged around 6,000 and would occasionally drop under 2,000 or even to 0; now we’re averaging upwards of 150,000.
  • Average Disk Queue Length post-upgrade is around 0.1 for both the data and tempdb arrays.

[Charts: Proc and PLE]

I will admit that I built a few additional indexes on this database with the help of some of the new DMVs in 2008 (more on this in my next post), but that didn’t account for the bulk of this change.

Overall, the upgrade went fairly smoothly.  Here are a few notes about things I had to fix during the upgrade process.

Check your identity values after a backup and restore from 2k -> 2k8.  I used this little script to do it.  You’ll get a lot of errors for tables that don’t have identity columns, but you can just ignore them.

exec sp_msforeachtable 'DBCC CHECKIDENT("?")'
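
If those errors bother you, something like the variant below should skip tables that don’t have an identity column.  This is just a sketch (it relies on the standard '?' placeholder substitution in sp_msforeachtable), not something I battle-tested during the upgrade:

exec sp_msforeachtable '
    IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1
        DBCC CHECKIDENT("?")'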

Beware temp table creation with nullable columns.

Somewhere between SQL 2000 and SQL 2008, the nullability properties of temp table columns changed.  If you do something like:

SELECT *
INTO #1
FROM YourTable
WHERE 1=2

You won’t necessarily get the same temp table in SQL 2000 vs. SQL 2008.  In SQL 2000, more of those columns might have been nullable; in SQL 2008, it properly pulls the nullability from the base table.  One of the ways you used to be able to force this in 2000 was to wrap a COALESCE(yourcolumn, NULL) around the column you needed to be nullable in the creation.  That has been hit or miss in 2008.  I wouldn’t rely on it and would really advise just building the table properly rather than using SELECT INTO to create it.
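
For illustration, here is roughly what both approaches look like.  The names (SomeColumn, YourTable) are just placeholders, and the int type is an assumption:

-- The old 2000-era trick: COALESCE forces the column to be treated as nullable (hit or miss in 2008)
SELECT COALESCE(SomeColumn, NULL) AS SomeColumn
INTO #OldTrick
FROM YourTable
WHERE 1=2

-- The safer option: declare the temp table explicitly so nullability is never in question
CREATE TABLE #Explicit (SomeColumn int NULL)

INSERT INTO #Explicit (SomeColumn)
SELECT SomeColumn
FROM YourTable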

Very similar to this, I’ve seen queries that use hard-coded placeholder values you intend to fill in later work fine in 2000 but fail in 2008.  For example:

SELECT Field1,
       0 IntToBeCalculatedLater,
       Field3
INTO #1
FROM MyTable

In my testing I couldn’t duplicate a case that works in 2000 but not in 2008, yet I’ve had stored procedures start failing in my database because of this, so apparently it can happen.  Note that you can use something like CAST(0 AS int) or CAST(NULL AS int) instead of just putting 0 there, and it will make the column nullable.
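
Using the same hypothetical columns as above, the version with the workaround looks like this (CAST(0 AS int) works the same way if you’d rather have a non-NULL default):

SELECT Field1,
       CAST(NULL AS int) AS IntToBeCalculatedLater,  -- nullable placeholder, to be filled in later
       Field3
INTO #1
FROM MyTable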

 
No Comments

Posted in All