I have a fairly straightforward SELECT query that joins several tables on integer primary keys and ends in a very simple WHERE clause of "where table1.IsActive = 1". I am injecting artificial data into my tables to test the performance of the query. With 1,000 rows in each table, the time does not even register; the query takes less than a second to execute. However, with 10,000 rows in each table, the execution time jumps to 18 seconds, even on repeated executions. I have tried replacing the selected columns with a count(*), but that does not affect the execution time either.
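The query has roughly the following shape (apart from table1.IsActive, the table and column names here are simplified placeholders for the real ones):

    SELECT t1.Id, t2.SomeColumn, t3.SomeOtherColumn
    FROM table1 AS t1
        INNER JOIN table2 AS t2 ON t2.Table1Id = t1.Id   -- join on integer primary/foreign keys
        INNER JOIN table3 AS t3 ON t3.Table2Id = t2.Id   -- join on integer primary/foreign keys
    WHERE t1.IsActive = 1;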
If I completely remove the WHERE clause, the query speeds back up and effectively takes 0 seconds. I do not consider 10,000 rows to be a lot of data. I tried the Database Engine Tuning Advisor, but it does not offer any index suggestions. I would be happy to add more indexes if it would help, since these tables are not updated often; an example of what I had in mind is sketched below.
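For instance, I was considering something along these lines (the index names are just for illustration, and I am assuming the integer primary key is already the clustered index):

    CREATE NONCLUSTERED INDEX IX_table1_IsActive
        ON table1 (IsActive);

or possibly a filtered index, since SQL Server 2008 supports them:

    CREATE NONCLUSTERED INDEX IX_table1_Active
        ON table1 (Id)
        WHERE IsActive = 1;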
This is a SQL Server 2008 R2 server. Shouldn't SQL Server be able to handle a dataset of this size much faster? What am I missing?