Channel: Forum SQL Server Database Engine

Questions on SQL 2016 Always Encrypted


Hi Experts,

I have a few questions on SQL 2016 Always Encrypted.

Question 1) Can I decrypt/remove encryption from a column that has Always Encrypted turned on and already contains some data?
We are using PowerShell commands to turn encryption on; do we have an option to turn it off?

Currently we are doing encryption using the PowerShell commands below:
 $encryptionChanges = @()
 $encryptionChanges += New-SqlColumnEncryptionSettings -ColumnName dbo.Customer.SSN -EncryptionType Deterministic -EncryptionKey $cekName
 $encryptionChanges += New-SqlColumnEncryptionSettings -ColumnName dbo.Customer.City -EncryptionType Deterministic -EncryptionKey $cekName
 Set-SqlColumnEncryption -ColumnEncryptionSettings $encryptionChanges -InputObject $database
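
A minimal sketch of the reverse operation, assuming the same $database object and column names as above: New-SqlColumnEncryptionSettings also accepts Plaintext as an encryption type, which makes Set-SqlColumnEncryption decrypt the column in place (no -EncryptionKey is needed for Plaintext).

 # Hedged sketch: decrypt the columns encrypted above by targeting Plaintext.
 $decryptionChanges = @()
 $decryptionChanges += New-SqlColumnEncryptionSettings -ColumnName dbo.Customer.SSN -EncryptionType Plaintext
 $decryptionChanges += New-SqlColumnEncryptionSettings -ColumnName dbo.Customer.City -EncryptionType Plaintext
 Set-SqlColumnEncryption -ColumnEncryptionSettings $decryptionChanges -InputObject $database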

Question 2) Suppose my Always Encrypted column is varchar(10) and I want to increase or decrease its length using an ALTER statement. Can we do that, or are there limitations around it?

Question 3) I have a table with three columns, say c1, c2, c3, and the table has, say, 100 rows in it.
Can I add extra columns to the table, or drop the encrypted column?

Thanks,

Sam


SQL Cluster shrinking Tempdb

When you fail over a SQL cluster, should that shrink tempdb, or is a reboot required?

SQL Agent job becomes corrupted


I have a physical server running Windows Server 2019 Standard and SQL Server 2019, both fully patched. The server had been running fine for some time, but then started rebooting at exactly the same time every night. I eventually tracked this down to the SQL Agent backup job. Every step in the job worked, but running the job on demand caused the server to reboot immediately. I deleted the job and recreated it (scheduled to run at a different time, a few minutes earlier) and the problem went away.

A couple of weeks later, the same thing happened again, with the server rebooting at the new time for the agent job. I deleted the job and recreated it, again scheduled for a slightly different time. A couple of weeks later, the same thing has happened again. Fortunately, this is a development server, so it doesn't have a big impact for us, but other people could find this business critical.

AT TIME ZONE 'GMT Standard Time' from UTC returns offset where it should not


Hey,

I have a table of transactions, and all dates are stored in UTC. The server is in Azure and it's SQL Server 2016. When I run the following query I get very strange results:

SELECT T.TransactionDate AT TIME ZONE 'GMT Standard Time' AS DATEinGMT, T.TransactionDate
FROM dbo.Transactions AS T
WHERE T.TransactionDateKey = 20190331
      AND T.TransactionDate AT TIME ZONE 'GMT Standard Time'
      BETWEEN '2019-03-31 00:59:00 +00:00' AND '2019-03-31 01:01:00 +00:00';



and here is what I get in the output:

DATEinGMT                    TransactionDate
2019-03-31 02:00:16 +01:00   2019-03-31 01:00:16 +00:00
2019-03-31 02:00:03 +01:00   2019-03-31 01:00:03 +00:00
2019-03-31 00:59:44 +00:00   2019-03-31 00:59:44 +00:00
2019-03-31 00:59:58 +00:00   2019-03-31 00:59:58 +00:00
2019-03-31 00:59:48 +00:00   2019-03-31 00:59:48 +00:00
2019-03-31 00:59:23 +00:00   2019-03-31 00:59:23 +00:00
2019-03-31 02:00:53 +01:00   2019-03-31 01:00:53 +00:00
2019-03-31 00:59:40 +00:00   2019-03-31 00:59:40 +00:00
2019-03-31 00:59:31 +00:00   2019-03-31 00:59:31 +00:00
2019-03-31 00:59:38 +00:00   2019-03-31 00:59:38 +00:00


It seems like the AT TIME ZONE function in the WHERE clause has different values from the SELECT part.

Any explanation for this?
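
For what it's worth, the two parts do agree once the offsets are read carefully: datetimeoffset values compare on their underlying UTC instant, so 2019-03-31 02:00:16 +01:00 and 2019-03-31 01:00:16 +00:00 are the same moment and both fall inside the BETWEEN range; 01:00 UTC on 2019-03-31 is exactly when British Summer Time begins, which is why the displayed offset flips mid-result-set. A hedged sketch of the same filter written against the raw UTC column (table and column names are from the post), which sidesteps the conversion in the WHERE clause entirely:

SELECT T.TransactionDate AT TIME ZONE 'GMT Standard Time' AS DATEinGMT,
       T.TransactionDate
FROM dbo.Transactions AS T
WHERE T.TransactionDateKey = 20190331
      AND T.TransactionDate BETWEEN '2019-03-31 00:59:00' AND '2019-03-31 01:01:00';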

How to get database out of single user


Hi

SQL Server 2005. I had changed the database to the single user/read only option. Now I can't change the database back to multi user, and all commands are getting stuck.

Msg 924, Level 14, State 1, Line 1
Database 'db1' is already open and can only have one user at a time.

I have tried everything below and failed to get around this issue:

sp_dboption db1, 'single', false

alter database db1 multi_user with rollback immediate

I cannot select the spid either, as I get Msg 924 with the commands below, which all get stuck:

select spid from sysprocesses where dbid = (id of database)

sp_who does not show it.

What can be done to get out of this? This is a production server and it is difficult to restart midweek.
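
A hedged sketch of the usual way out (the database name db1 is from the post; on SQL Server 2005, master..sysprocesses shows which spid occupies the single-user slot):

-- Find the session holding the database open.
SELECT spid, hostname, program_name, login_time
FROM master..sysprocesses
WHERE dbid = DB_ID(N'db1');

-- KILL takes a literal spid, not a variable; substitute the value returned above.
-- KILL 53;

ALTER DATABASE db1 SET MULTI_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE db1 SET READ_WRITE;  -- the post says read only was set as well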

SSL Certificate not visible from SQL Configuration Manager


Hi all,

for some reason I am not able to see the certificate in Configuration Manager --> SQL Server Network Configuration --> Protocols for MSSQLSERVER when I right-click and look under the Certificate dropdown.
The certificate was imported by the sysadmin, and I can see it under the Personal folder of Console Root - Certificates (Local Computer). It also looks like it is configured and imported properly, in line with the requirements in the Microsoft links below:

http://technet.microsoft.com/en-us/library/ms191192.aspx
http://technet.microsoft.com/en-us/library/ms189067%28v=sql.105%29.aspx

The operating system where SQL Server resides is Windows Server 2012 Standard Edition, and SQL Server is 2012 Developer Edition.

Since my SQL Server Engine service is running under a service account with a Deny Logon Locally domain policy, I started the service as LocalSystem and opened Configuration Manager with an administrative account, but it still didn't work.

Feedback on this issue is highly appreciated.

Cheers

Change tracking auto cleanup not working


We have a production clustered instance running SQL Server 2014 SP3 CU1 (version 12.0.6205.1). It hosts, amongst others, two databases with change tracking enabled. Both have auto cleanup enabled and the retention period set to 30 days.

I have noticed, however (querying sys.dm_tran_commit_table), that one has retained transactions going back to 4th November 2019, and the other even further back, to 2nd December 2018. The same databases in our test environment have both retained transactions back to 31st December 2019, as I would expect.

By chance, I restored backups of the two production databases onto a dev machine the other day for unconnected reasons, but checking those copies now shows transactions back, once again, to the expected 31st December 2019.

The only difference I can see is that the test and dev servers are running SQL Server 2014 SP3 CU4 (we are waiting for a suitable window to upgrade production). Is this a bug in SP3 CU1 or could something else be causing it?
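
For comparing the environments, a hedged pair of checks (they only confirm the symptom, they do not fix cleanup):

-- Configured auto cleanup and retention, per database.
SELECT DB_NAME(database_id) AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases;

-- Oldest commit still retained; run inside each affected database.
SELECT MIN(commit_time) AS oldest_retained_commit
FROM sys.dm_tran_commit_table;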

Understanding the mdf file format


I know this does not really fit the section, but I think the other sections do not fit any better, so I am writing here. Please move it if I missed a better place for this post.

I want to get a rough understanding of the mdf format, and the best source of information I found is https://docs.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver15. Questions:

1. If the PFS lists information for 8088 pages, why is the second PFS page at page 8088 and not at 1 (the first PFS page) + 8088 + 1 = 8090? And are the other metadata pages (like the GAM) treated the same way as data pages from the PFS point of view?
2. How does the database server determine the supposed size of an mdf file? From my understanding, page 0 does not contain size information, so the server has to parse either the GAM or the PFS, correct? (Probably the GAM, as it is more coarse-grained and thereby faster?)
3. How can the database determine whether there is an n-th PFS page, or whether the mdf file is broken, if the last page of the (n-1)-th interval was allocated?
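
For poking at these structures directly, a hedged illustration (the database name mydb is hypothetical): the undocumented DBCC PAGE command dumps raw pages, and in every data file page 1 is the first PFS page and page 2 the first GAM page.

DBCC TRACEON (3604);           -- route DBCC output to the client
DBCC PAGE (N'mydb', 1, 1, 3);  -- file 1, page 1: the first PFS page
DBCC PAGE (N'mydb', 1, 2, 3);  -- file 1, page 2: the first GAM page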



Why does SQL Server start the redo operation from the Minimum Recovery LSN (MinLSN) instead of the last checkpoint during crash recovery?



Suppose I have a transaction that goes through the following steps:

Step 1: begin transaction
Step 2: update tableA set column1=2
Step 3: a checkpoint happens (this is the last checkpoint before the system crashes)
Step 4: update tableA set column1=4
Step 5: the system crashes

Also, from MSDN, the MinLSN is the minimum of the:

LSN of the start of the checkpoint.
LSN of the start of the oldest active transaction.
LSN of the start of the oldest replication transaction that has not yet been delivered to the distribution database.


Then, during crash recovery, SQL Server does the redo phase from the MinLSN.
So for the above example, based on the MinLSN definition, the MinLSN is the LSN of the log record generated at Step 2.

So my question is: why does SQL Server start the redo operation from the log record generated at Step 2?
My understanding is that the last checkpoint has already flushed the dirty pages to disk for Step 2, so it does not make sense to redo it again.
Can SQL Server do the redo operation just from the last checkpoint's log records instead of the logs generated from Step 2?
Thank you.


 



Attaching the datafiles to database


USE Master;
GO

CREATE DATABASE [AcquiredData3]
ON (FILENAME = 'J:\Backup\M\AcquiredData.mdf'),
   (FILENAME = 'J:\Backup\M\AcquiredData_Log.ldf')
FOR ATTACH;

Msg 5120, Level 16, State 101, Line 2

Unable to open the physical file "J:\Backup\M\AcquiredData.mdf". Operating system error 5: "5(Access is denied.)".

How do I resolve this issue? I am running SQL Server Management Studio as administrator and am logged in with the sa account.

SQL Agent Job History is wiped out


Hi All,

One of our index rebuild jobs runs weekly, once on Sunday. It runs fine and we get an email.
Lately I realized that when I try to view the job history, it is EMPTY. Any specific reasons why the job history is being wiped out? Is there any setting for job history which could be wiping out the job history info?
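
Two hedged things worth checking: SQL Server Agent trims history to a server-wide row cap and a per-job row cap, and a verbose weekly job can blow through them. The second block uses sp_set_sqlagent_properties, which is undocumented but is what SSMS itself calls; the values are examples only.

-- How many history rows each job currently retains.
SELECT j.name, COUNT(*) AS history_rows
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
GROUP BY j.name;

-- Raise the caps (same settings as the SSMS Agent history page).
EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = 10000,
     @jobhistory_max_rows_per_job = 1000;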

Thanks,
Sam

shrinking is very very slow ... any reasons ?


Hi All,

Recently, in one of our sub-prod environments, the drive space got filled up.

I tried to pull out free-space information using DMVs to see if there was any room for a shrink operation.

USE dbname
GO
DBCC SHRINKFILE (N'db_name_dat', 0, TRUNCATEONLY)
GO

It was not releasing any space to the OS at all. After waiting 30-40 minutes, I killed the shrink statement.

I wrote a custom SQL script which shrinks the mdf file in small chunks (i.e. 50 MB). Even then it is taking a long time: it has been more than a day, it has released only 3 GB of space so far, and the shrink job is still running.

Note: The SPID was never blocked.
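
For reference, a minimal sketch of the chunked approach described above (the file name and the 50 MB chunk are from the post; the 1000 MB floor is an assumption). Shrink moves pages one at a time from the end of the file, so heavy fragmentation, LOB data, or slow I/O commonly make it crawl regardless of chunk size:

DECLARE @size_mb int, @prev_mb int = 0, @target_mb int;
SELECT @size_mb = size / 128 FROM sys.database_files WHERE name = N'db_name_dat';
WHILE @size_mb > 1000 AND @size_mb <> @prev_mb  -- stop at the floor, or when a pass makes no progress
BEGIN
    SET @prev_mb = @size_mb;
    SET @target_mb = @size_mb - 50;             -- 50 MB per iteration, as in the post
    DBCC SHRINKFILE (N'db_name_dat', @target_mb);
    SELECT @size_mb = size / 128 FROM sys.database_files WHERE name = N'db_name_dat';
END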

Thanks,

Sam

how to monitor Longest Running Transaction using SQL Server Agent Alerts


hi there:

I understand that this article explains in detail how to monitor the longest running transactions using agent alerts:

https://www.sqlservercentral.com/articles/monitoring-longest-running-transaction-using-sql-server-agent-alerts

However, the author also put a note saying it only works for databases under the read committed snapshot isolation level.

Since most of my databases do not use snapshot isolation, is there still a way to use SQL Agent to detect the longest running transactions and send out alerts?

I fully understand how to complete this task using SQL Server scheduled jobs, but I would prefer to use SQL Agent alerts, as it puts less pressure on the server and gives me near real-time alerting.
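
One hedged middle ground that does not depend on snapshot isolation: a lightweight polling step that raises a logged error once a transaction crosses a threshold, with a SQL Agent alert defined on that severity (or a custom error number) doing the near real-time notification. The 5-minute threshold and severity 16 are assumptions:

DECLARE @oldest datetime, @minutes int;
SELECT @oldest = MIN(at.transaction_begin_time)
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
     ON st.transaction_id = at.transaction_id;  -- restrict to session-bound transactions
SET @minutes = DATEDIFF(MINUTE, @oldest, GETDATE());
IF @minutes >= 5
    RAISERROR (N'Longest running transaction: %d minutes.', 16, 1, @minutes) WITH LOG;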

Thank you

Hui


--Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

Indexes and fragmentation


Hi,

A couple of questions on indexes. I'm seeing excessive fragmentation for a specific index quite often.

- We rebuilt index_id = 9 very recently, 3 days back, and now the fragmentation shows 88.63%. What does that indicate?
- Why is there so much fragmentation only in index_id = 9 and not in the other indexes?
- How can we avoid or minimize fragmentation?


Note:
The index fill factor for all indexes is 0 (i.e. 100%).
index_id = 9 is defined on an nvarchar(255) column.
Table row count = 3,823,573.
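
A hedged way to put that 88.63% in context (the table name dbo.MyTable is hypothetical): avg_fragmentation_in_percent is close to meaningless on small indexes, so read it alongside page_count. If the nvarchar(255) key receives effectively random inserts, a fill factor below 100 (say 90) on that one index is the usual lever for slowing page splits down.

SELECT ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'), 9, NULL, 'LIMITED') AS ips;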

Thanks,
Sam


Record Kill commands

Is there any way to record the details of a KILL command run through SSMS (hostname, process, date, etc.)?
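
A hedged sketch using Extended Events (the session name is made up): capture completed batches whose text starts with KILL, along with the hostname, login, and session; each event carries its own timestamp.

CREATE EVENT SESSION [capture_kill] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.client_hostname, sqlserver.server_principal_name, sqlserver.session_id)
    WHERE sqlserver.like_i_sql_unicode_string(batch_text, N'KILL%')
)
ADD TARGET package0.event_file (SET filename = N'capture_kill');
GO
ALTER EVENT SESSION [capture_kill] ON SERVER STATE = START;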

Deadlock graph analysis


I am using Extended Events to gather deadlock information on a SQL 2008 R2 server.

The 'input buffer' usually shows the SQL statement that is being executed.

Here, in one of my deadlock graphs, I don't see any information in the inputbuf or the execution stack:

<executionStack>
    <frame procname="" line="62" stmtstart="4116" stmtend="4666" sqlhandle="0x03000a004e241b15f6f11400d3a700000100000000000000">
    </frame>
    <frame procname="" line="1" sqlhandle="0x01000a0019de5b31784b8e5b000000000000000000000000">
    </frame>
   </executionStack>
   <inputbuf>
   </inputbuf>
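
A hedged way to recover the statement while the plan is still cached: the first frame above carries a sqlhandle, which sys.dm_exec_sql_text can resolve (the handle below is copied from the graph); the stmtstart/stmtend byte offsets then locate the exact statement within that batch.

SELECT st.text
FROM sys.dm_exec_sql_text(0x03000a004e241b15f6f11400d3a700000100000000000000) AS st;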


SQL Server 2019 > Polybase > Cosmos MongoDB | Missing Columns from schema

I have an external table in my SQL 2019 server, linked to a Cosmos database via PolyBase.
The database has a number of different types of documents with differing JSON schemas in them. These are visible in Azure Data Explorer, and the one I am struggling with looks something like this:

    {
      "_id" : ObjectId("fb5ab7f8443c4d0298e19e51"),
      "type" : "myType",
      "dateRange" : {
        "start" : { "$date" : 1581453600000 },
        "end" : { "$date" : 1581454800000 }
      },
      "External1Id" : ObjectId("093d1319314d479bbc03bc69"),
      "External2Id" : ObjectId("9af08a9eee264fd5bfceb4fd"),
      "created" : { "$date" : 1581366945000 },
      "updated" : { "$date" : 1581366945000 }
    }

I can create an external table and link it to most fields, but I can't get the second and third id fields accepted by the schema validation. The schema says there are 181 columns in the table. These have unique names in the document.


    CREATE EXTERNAL TABLE dbo.tblMyObjects(
      _id             NVARCHAR(128) NOT NULL
    , type            NVARCHAR(128) NULL
    , dateRange_start DATETIME2 NULL
    , dateRange_end   DATETIME2 NULL
    , created         DATETIME2 NULL
    , updated         DATETIME2 NULL) WITH(DATA_SOURCE = CosmosDB, LOCATION = N'mydocumentdb.qa');
    GO

The returned error is

    xxyy was not found in the external table


If I create an external table with the recommended schema, only the columns in my table above are populated with data. (The ID fields using nvarchar follow the recommendation in the type mappings document on BOL.)

The Books Online flattening notes say to name the fields after the first element in the JSON, but these documents don't have one, so I was expecting that adding External1Id nvarchar(128) would get me the data. Instead I get the 'not found in the external table' error (I have also tried using nvarchar(max), with the same result).

The schema validator rejects this as a field, and there isn't an alternative in the data. Am I right to expect the name here to be External1Id, or should it be something else? Is there a way to debug what is going on here, or any recommendations on getting the errant columns to manifest in the PolyBase schema?

Supporting Cumulative updates of SQL Server 2012 for TLS 1.2 installation


Hi Team,

We are using SQL Server 2012 Service Pack 3 (11.0.6020.0). My client needs to add TLS 1.2 support to this version to avoid losing application connections. Could you please advise whether this is possible after installing a cumulative update (and which CU number), or whether we should apply Service Pack 4? What is the best option?

SQL Server Install File Missing Language File


Three days ago I purchased SQL Server Standard 2019. I am receiving an error message when I click setup.exe. The message states that "this SQL Server setup media does not support the language of the OS, or does not have the SQL Server English-language version installation files".

My OS is running English. I downloaded the English version of SQL Server. I have double-checked my computer's region and language setup for the language, which exists and is selected in both region and languages. I cannot find the language file in the installation files for SQL Server. I am unable to proceed with installation.

I have called Microsoft and spoken with various teams including Tech Support, Professional Team, MS Volume Licensing Team, and more. No one offered actual assistance other than requesting my personal information and setup. Once receiving that information each team routed me to another team. I was given two different case numbers from two different teams. Each subsequent team informed me the case numbers were invalid for their team and only good for the previous team. I have spent at least 3 hours on the phone being passed from team to team.

I have visited support.microsoft.com/OAS where I was told by the Professional Team to submit an online ticket as they were not accepting my previous tickets and refused to offer me any assistance without an online ticket. An online ticket costs $500. I just spent 3K+ on the product. I just want to install it.

I am using Windows 10. I am attempting to install SQL Server Standard 2019, running on Windows Server 2019 Standard.

Query scalability issue


I have a database which holds data for several LDAP directories. It is bulk refreshed on a schedule, and maintenance is run immediately afterwards so that a series of interactive reports can be executed. The reports are programmatically generated and often join between the collections of directory data; for example, objects in LDAP directory 1 may have an attribute of type foo whose value matches objects in LDAP directory 2 having an attribute of type bar with the same value. The two tables of interest are as follows:

CREATE TABLE [DirectoryObjects] (
    [Id] int NOT NULL IDENTITY,
    [DistinguishedName] nvarchar(832) NOT NULL,
    [DirectoryConfigurationId] int NOT NULL,
    CONSTRAINT [PK_DirectoryObjects] PRIMARY KEY ([Id]),
    CONSTRAINT [FK_DirectoryConfigurationId]
      FOREIGN KEY
    ([DirectoryConfigurationId])
      REFERENCES
    [DirectoryConfigurations] ([Id]) ON DELETE CASCADE
);

CREATE TABLE [DirectoryAttributes] (
    [Id] int NOT NULL IDENTITY,
    [Name] nvarchar(128) NOT NULL,
    [Value] nvarchar(832) NULL,
    [DirectoryObjectId] int NOT NULL,
    CONSTRAINT [PK_DirectoryAttributes] PRIMARY KEY ([Id]),
    CONSTRAINT [FK_DirectoryObjectId]
      FOREIGN KEY
    ([DirectoryObjectId])
      REFERENCES
    [DirectoryObjects] ([Id]) ON DELETE CASCADE
);

I have the following initial attempt at setting up indexes:

CREATE NONCLUSTERED INDEX [IX_DirectoryObjects_DirectoryConfigurationId] ON [dbo].[DirectoryObjects]
(
	[DirectoryConfigurationId] ASC
) ON [PRIMARY]
GO

CREATE NONCLUSTERED INDEX [IX_DirectoryObjects_Id_DirectoryConfigurationId] ON [dbo].[DirectoryObjects]
(
	[Id] ASC,
	[DirectoryConfigurationId] ASC
) ON [PRIMARY]
GO

CREATE UNIQUE NONCLUSTERED INDEX [IX_DirectoryObjects_DirectoryConfigurationId_DistinguishedName] ON [dbo].[DirectoryObjects]
(
	[DirectoryConfigurationId] ASC,
	[DistinguishedName] ASC
) ON [PRIMARY]
GO

CREATE NONCLUSTERED INDEX [IX_DirectoryAttributes_DirectoryObjectId__Name_Value] ON [dbo].[DirectoryAttributes]
(
	[DirectoryObjectId] ASC
)
INCLUDE([Name],[Value]) ON [PRIMARY]
GO

CREATE NONCLUSTERED INDEX [IX_DirectoryAttributes_DirectoryObjectId] ON [dbo].[DirectoryAttributes]
(
	[DirectoryObjectId] ASC
) ON [PRIMARY]
GO

USE [PSU_DCVALIDATION2]
GO

CREATE NONCLUSTERED INDEX [IX_DirectoryAttributes_Name__Value_DirectoryObjectId] ON [dbo].[DirectoryAttributes]
(
	[Name] ASC
)
INCLUDE([Value],[DirectoryObjectId]) ON [PRIMARY]
GO


Essentially a table for the long ids of each object in each directory, and a table for the object attribute key/value data.

The following query is programmatically generated:

DECLARE @P_T0 INT = 1;
DECLARE @P_T0_C0 NVARCHAR(128) = N'sAMAccountName';
DECLARE @P_T0_C1 NVARCHAR(128) = N'sAMAccountType';
DECLARE @P_T0_C2 NVARCHAR(128) = N'userAccountControl';
DECLARE @P_T0_C3 NVARCHAR(128) = N'lastLogonTimestamp';
DECLARE @P_T0_C4 NVARCHAR(128) = N'givenName';
DECLARE @P_T0_C5 NVARCHAR(128) = N'sn';
DECLARE @P_T1 INT = 2;
DECLARE @P_T1_C0 NVARCHAR(128) = N'alternateID';
DECLARE @P_T1_C1 NVARCHAR(128) = N'description';
DECLARE @PageOffset INT = 0;
DECLARE @PageSize INT = 1000;
WITH [T0] ([C0],[C1],[C2],[C3],[C4],[C5]) AS
(
SELECT [DA0].[Value],[DA1].[Value],[DA2].[Value],[DA3].[Value],[DA4].[Value],[DA5].[Value]
FROM [DirectoryObjects]
LEFT JOIN [DirectoryAttributes] [DA0] ON [DirectoryObjects].[Id]=[DA0].[DirectoryObjectId] AND [DA0].[Name]=@P_T0_C0
LEFT JOIN [DirectoryAttributes] [DA1] ON [DirectoryObjects].[Id]=[DA1].[DirectoryObjectId] AND [DA1].[Name]=@P_T0_C1
LEFT JOIN [DirectoryAttributes] [DA2] ON [DirectoryObjects].[Id]=[DA2].[DirectoryObjectId] AND [DA2].[Name]=@P_T0_C2
LEFT JOIN [DirectoryAttributes] [DA3] ON [DirectoryObjects].[Id]=[DA3].[DirectoryObjectId] AND [DA3].[Name]=@P_T0_C3
LEFT JOIN [DirectoryAttributes] [DA4] ON [DirectoryObjects].[Id]=[DA4].[DirectoryObjectId] AND [DA4].[Name]=@P_T0_C4
LEFT JOIN [DirectoryAttributes] [DA5] ON [DirectoryObjects].[Id]=[DA5].[DirectoryObjectId] AND [DA5].[Name]=@P_T0_C5
WHERE [DirectoryObjects].[DirectoryConfigurationId]=@P_T0
),
[T1] ([C0],[C1]) AS
(
SELECT [DA0].[Value],[DA1].[Value]
FROM [DirectoryObjects]
LEFT JOIN [DirectoryAttributes] [DA0] ON [DirectoryObjects].[Id]=[DA0].[DirectoryObjectId] AND [DA0].[Name]=@P_T1_C0
LEFT JOIN [DirectoryAttributes] [DA1] ON [DirectoryObjects].[Id]=[DA1].[DirectoryObjectId] AND [DA1].[Name]=@P_T1_C1
WHERE [DirectoryObjects].[DirectoryConfigurationId]=@P_T1
)
-- SELECT [T0].[C0] P0,[T0].[C1] P1,[T0].[C2] P2,[T0].[C3] P3,[T0].[C4] P4,[T0].[C5] P5,[T1].[C1] P6
SELECT COUNT (*)
FROM [T0]
LEFT JOIN [T1] ON [T0].[C0]=[T1].[C0]

WHERE NOT ([T0].[C0] IS NULL AND [T0].[C1] IS NULL AND [T0].[C2] IS NULL AND [T0].[C3] IS NULL AND [T0].[C4] IS NULL AND [T0].[C5] IS NULL AND [T1].[C1] IS NULL)

--ORDER BY [P0] -- DESC
--OFFSET @PageOffset ROWS FETCH NEXT @PageSize ROWS ONLY

With the current indexes, the count query runs reasonably quickly.

Reversing the comments, so that the result query runs instead of the count, produces inconsistent results: if DESC is enabled, the result set is returned immediately, whereas without it the query takes some time. Further, removing the ORDER BY and paging returns the entire result set even quicker.

What can be done to improve this? I have the option to both create/remove indexes and statistics and refactor the code for a more efficient query.
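
A hedged suggestion rather than a tested fix: every LEFT JOIN in the generated query seeks on (DirectoryObjectId, Name) and only reads Value, so a composite key in exactly that order lets each join resolve with one narrow seek instead of leaning on the single-column or Name-leading indexes above.

CREATE NONCLUSTERED INDEX [IX_DirectoryAttributes_DirectoryObjectId_Name]
ON [dbo].[DirectoryAttributes] ([DirectoryObjectId], [Name])
INCLUDE ([Value]);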

Thanks!