Hi,
I'm seeing very different performance when running the same set of queries through SSMS and through SQLCMD, locally on a SQL Server 2008 R2 cluster.
Scripts to run (a simplified sketch is included below):
Create a new empty database and one empty table with a clustered index: SSMS 1 sec / SQLCMD 1 sec
Insert 10K rows: SSMS 11 sec / SQLCMD 3 min 7 sec
Drop the database: SSMS 0 sec / SQLCMD 1 sec
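For reference, a condensed version of what the scripts do (all names here are placeholders and the insert part is simplified into a loop to keep this short, but in both cases it ends up as one single-row INSERT per row):

CREATE DATABASE PerfTest;
GO
USE PerfTest;
GO
-- one empty table with a clustered index on the key column
CREATE TABLE dbo.TestTable
(
    Id      INT          NOT NULL,
    Payload VARCHAR(100) NULL,
    CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED (Id)
);
GO
-- insert 10K rows, one statement per row (this is the part that is slow under SQLCMD)
DECLARE @i INT;
SET @i = 1;
WHILE @i <= 10000
BEGIN
    INSERT INTO dbo.TestTable (Id, Payload) VALUES (@i, 'test row');
    SET @i = @i + 1;
END;
GO
USE master;
GO
DROP DATABASE PerfTest;
GO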
I've tried running a SQL Server Profiler trace with the Showplan XML and SQL:StmtCompleted events enabled.
In both cases the Showplan for the statements shows a Clustered Index Insert, and the reads/writes/durations are comparable.
I've had a look at this post: http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/777c6113-d190-4080-8508-6344b03bbb91 and tried adding the SET statements from SSMS -> Tools -> Options -> Query Execution -> SQL Server to the scripts we run, but with no change in performance.
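If it matters, the SET statements I added at the top of the scripts were along these lines (copied from that options dialog as best I could; the exact list on my machine may differ slightly):

-- connection options as shown in SSMS, added to the SQLCMD scripts with no measurable effect
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
SET ANSI_NULL_DFLT_ON ON;
SET IMPLICIT_TRANSACTIONS OFF;
SET CURSOR_CLOSE_ON_COMMIT OFF;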
One thing that DID help was disabling the rows-affected count that each T-SQL statement returns as part of the result. With that suppressed, execution via SQLCMD takes more or less the same elapsed time as SSMS. So when the result that SQLCMD receives as a client contains less data, the elapsed time drops. Unfortunately this is not a usable fix for us, because the receiving application relies on that metadata.
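Concretely, what I mean by disabling the count is putting this at the top of the insert script:

-- suppresses the "(N row(s) affected)" message that is otherwise sent back for every statement
SET NOCOUNT ON;

With that single line, the SQLCMD timing for the 10K-row insert drops to roughly the SSMS timing, which is what makes me suspect the overhead is in returning/printing those per-statement messages rather than in the inserts themselves.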
So what is causing the performance difference between SSMS and SQLCMD?
Do the two use different network protocols? It's a clustered environment, so I can't force SQLCMD to use shared memory (see the protocol check sketch below).
Should I adjust the screen buffer size of the Command Prompt?
Could it be network related even though I'm running this locally on the cluster?
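On the protocol question, I believe the transport each connection actually uses can be checked with sys.dm_exec_connections (this is just my plan for narrowing it down, not something I've confirmed yet):

SELECT session_id, net_transport, protocol_type
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

Running that from an SSMS query window and from a SQLCMD session should show whether one is on shared memory and the other on TCP or named pipes. As far as I know, SQLCMD can also be pointed at a specific protocol with a prefix on the server name, e.g. sqlcmd -S tcp:MyVirtualServer (MyVirtualServer being a placeholder for the cluster's virtual network name).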
Any kind of input is appreciated. Thanks