I have a piece of code that is doing some odd things, one of which is generating a bad plan: a few million bogus logical reads and 10 seconds of CPU instead of 100ms when one of its parameter values is null. The code looks something like:
    select t1.foo, t2.bar, t3.baz
    from sometable t1
    inner join anothertable t2 on t1.pk = t2.fk
    inner join yetanothertable t3 on t2.pk = t3.fk
    ...
    where (@arg1 is null or @arg1 = t1.somecolumn1)
      and (@arg2 is null or @arg2 = t1.somecolumn2)
      and (@arg3 is null or @arg3 = t2.somecolumn3)
      and (@arg4 is null or @arg4 = t3.somecolumn4)
      and ...
but with more tables and more where clauses.
FWIW, it occurs in an SP that is called from another SP, not from the app or the command line.
Now, there is one particular arg, call it @arg7, which is sometimes passed in as null, and when it is, execution costs about 100x more logical reads and 100x more CPU.
But, why should that happen? It's not a very selective argument in the first place.
It looks like SQL Server recompiles that statement when it sees the null parameter (or maybe it's recompiling the statement for other reasons), but why should a null value make the query run 100x worse?
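For what it's worth, one thing I may try is forcing a fresh compile on just that statement so the plan is built for the actual parameter values on each call, roughly like this (names made up to match the sketch above):

```sql
select t1.foo, t2.bar, t3.baz
from sometable t1
inner join anothertable t2 on t1.pk = t2.fk
inner join yetanothertable t3 on t2.pk = t3.fk
where (@arg1 is null or @arg1 = t1.somecolumn1)
  and (@arg7 is null or @arg7 = t1.somecolumn7)
option (recompile);  -- build a per-execution plan; the null checks can then
                     -- be folded out for the arguments that are actually null
```

But even if that works around it, I'd still like to understand why the null case gets such a bad plan in the first place.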
SQL Server 2008 R2 Standard, 64-bit, FWIW.
And so far, my attempts at a simple repro script have not been able to show the problem.
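In case it helps, the repro attempt looked roughly like this (all names invented; the real SP has many more tables and where clauses):

```sql
-- minimal repro attempt: same catch-all pattern, scaled way down
create table t1 (pk int primary key, somecolumn7 int, foo int);
create table t2 (pk int primary key, fk int, bar int);
go
create procedure test_sp @arg7 int
as
    select t1.foo, t2.bar
    from t1
    inner join t2 on t1.pk = t2.fk
    where (@arg7 is null or @arg7 = t1.somecolumn7);
go
exec test_sp @arg7 = 1;     -- fast
exec test_sp @arg7 = null;  -- expected this to show the blowup, but it doesn't
```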
--
OK, it's actually worse than that.
If I pass in an invalid value (an int) for @arg7, it does not seem to make the where clause reject the rows it should, yet when we pass in valid values the query seems to return the correct results.
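To be concrete, by "invalid" I mean a call along these lines (made-up names again, and assuming @arg7 is a value that should match nothing):

```sql
exec outer_sp @arg7 = 999999;  -- no row has somecolumn7 = 999999,
                               -- yet rows still come back
```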
It's very disturbing.
So much so that I'm in total denial about that part; let's just address the performance aspect if we can, and maybe that will shed some light on this as well.
Thanks,
Josh