Disabling SSL v3 on Windows 2008 R2, 2012 R2 Server & ASP.Net Applications

February 15, 2015

We had been seeing warnings from our vendors for weeks telling us that they were turning off support for SSL v3. Not sure how this would affect us, we basically ignored the emails. Several weeks later, we found ourselves having to deal with the problem head on, and now we know how it affects folks like us.

Not only do we sell gift certificates from our web sites but we also have several integrations with outside vendors. We quickly identified that it was an SSL v3 issue causing the problem, but could not find an all-inclusive article on how we should resolve the problem in our environment.

Here are some of the errors that we were seeing from our C# and ASP.Net 4.0 applications:

  1. The underlying connection was closed: An unexpected error occurred on a send. —> System.IO.IOException: The handshake failed due to an unexpected packet format.
  2. The underlying connection was closed: An unexpected error occurred on a receive.
  3. The client and server cannot communicate, because they do not possess a common algorithm
  4. The handshake failed due to an unexpected packet format.

What first set me on the right path, as we too were having a problem with a different payment gateway, was this article (my responses are at the bottom of it):

http://stackoverflow.com/questions/26742054/the-client-and-server-cannot-communicate-because-they-do-not-possess-a-common-a

After applying partial fixes from various articles, we found that the solution was simple, as long as you implemented the entire thing.

First, in your .Net code, you need to include the line:

// For all .Net 4.0 and lower applications:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;

// For all .Net 4.5 and higher applications:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

For reference: https://msdn.microsoft.com/en-us/library/system.net.securityprotocoltype(v=vs.110).aspx

I simply put this line in the constructor of every class where I performed some type of encrypted communication with a vendor. Some classes already had the line, but it was setting the Ssl3 option. Also note that ONLY TLS 1.0 is available in .Net 4.0 and below; TLS 1.1 and TLS 1.2 are only available in .Net 4.5 and above.
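Here is a minimal sketch of what that placement looks like; the class name and its purpose are hypothetical, not from any real integration:

using System.Net;

public class VendorGatewayClient
{
    public VendorGatewayClient()
    {
        // Force TLS 1.0 instead of letting SChannel negotiate SSL 3.0.
        // On .Net 4.5 and higher, use SecurityProtocolType.Tls12 instead.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
    }

    // ... the HttpWebRequest / web service calls to the vendor live here ...
}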

The second and final fix was to make sure that the protocols were correctly configured on the servers, Server 2012 R2 in my case. What I found was that people assumed some protocols were already set one way and only configured the ones they cared about for their situation – a problem when other teams might be using the same server for a .Net application of a different level.

At the bottom of this post is my entire registry settings file; it completely turns off SSL v2 and SSL v3 for client and server applications and enables the TLS protocols.

After applying these registry settings, adding that single line of code to my .Net 4.0 applications and rebooting the server, ALL PROBLEMS went away. I have repeated this fix across multiple servers, including Windows 2008 R2, 2012 and 2012 R2.

The only thing you need to be aware of is that when you upgrade your applications to .Net 4.5 and above, you need to change the SecurityProtocolType in your code and modify the registry settings to disable TLS 1.0 and TLS 1.1, keeping only TLS 1.2 enabled.
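When that day comes, the TLS 1.0 and TLS 1.1 entries in the file below would flip to the same disabled pattern used for the SSL protocols – a sketch for the TLS 1.0 client key, following the same layout:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000

(repeat for the TLS 1.0 Server key and both TLS 1.1 keys)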

Here is a good resource for testing your servers (and browser) to make sure everything is configured the way you need it to be; it will show you how you rank and what you should fix. Be careful not to over-fix your server for your situation, otherwise you will break everything again. The initial goal is to disable SSL v3, not TLS 1.0.

https://www.ssllabs.com/ssltest/

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

Categories: .Net

How to get the Server Name, Instance Name, Computer Name, Data Path, Log Path and Backup Path from a SQL Server 2014 Cluster

December 17, 2014

From time to time I need to write dynamic scripts that need the current instance name and, sometimes more specifically, the actual Computer Name of the cluster node that I am on. Here are a couple of helpful scripts that get me that information.

Some of the SERVERPROPERTY calls do not work on SQL Server 2008 R2, and you might need to fall back to the registry calls if you really need the data.

-- gives you the 'machine name': 'SQL-MYINSTANCE'
SELECT SERVERPROPERTY('MachineName')

-- gives you the 'server name': 'SQL-MYINSTANCE\MYINSTANCE'
SELECT SERVERPROPERTY('ServerName')

-- gives you the local 'ComputerName' of the server in the cluster
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS')

-- this also gives you the 'physical machine name' of the machine that you are running the command on: 'S-TOL-SQL3'
DECLARE @dir VARCHAR(4000)
EXEC master.dbo.xp_regread
    N'HKEY_LOCAL_MACHINE',
    N'SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName',
    'ComputerName',
    @dir OUTPUT
SELECT @dir

-- gets the default data path
SELECT SERVERPROPERTY('INSTANCEDEFAULTDATAPATH')

-- gets the default log path
SELECT SERVERPROPERTY('INSTANCEDEFAULTLOGPATH')
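On SQL Server 2008 R2 the two SERVERPROPERTY calls above come back NULL; a fallback sketch (my own, not from the original scripts) is to read the folder straight off the master database's primary data file. Note that master can live somewhere other than the default data folder, so sanity-check the result:

-- fallback for 2008 R2: take the folder portion of master's data file path
SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
FROM master.sys.master_files
WHERE database_id = 1 AND file_id = 1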

-- gets you the default backup path
DECLARE @dir VARCHAR(4000)
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    'BackupDirectory',
    @dir OUTPUT
SELECT @dir

Categories: .Net

How to move a SQL Server 2008 R2 or SQL Server 2012 clustered instance to a new SQL Server 2014 cluster

November 25, 2014

Here are the steps that I took while upgrading a SQL Server 2008 R2, 2012 & 2014 mixed version SQL Server cluster that was running on Windows Server 2008 R2 to a new cluster running strictly SQL Server 2014 and Windows Server 2012 R2. I don’t cover all the steps (such as moving AS and SSRS), but I will try to give a good overview of my environment and what I needed to do and prepare for.

We started with a 2 node Windows 2008 R2 Server cluster running SQL Server 2008 R2 with 8 instances on it. Over time, we slowly upgraded some SQL Server instances to SQL Server 2012 and some all the way to SQL Server 2014, so at the end of the day I had all three versions of SQL Server running on this cluster. With our load increasing, it was time to add another node to the cluster for performance and HA purposes, and we took this as a good opportunity to upgrade the entire cluster in the process. This blog is dedicated to the first step of that move – taking one instance from the old cluster and moving it to the new cluster.

We built the new Dell R920 as a single node cluster running Windows Server 2012 R2 and ONLY SQL Server 2014, atop a new VNX2 SAN. Our goal is to move half of our instances over to the new cluster, tear down one of the Dell R910 servers, reinstall it with Windows Server 2012 R2 and add it to the new cluster. Then we will move the other half of the instances to the new cluster, tear down the remaining Dell R910 server, reinstall it with Windows Server 2012 R2 and add it to the new cluster, forming a three node Windows 2012 R2 cluster running only SQL Server 2014. We purposefully decided to skip upgrading the pre-2014 instances to SQL Server 2014 to save time during this process – and because we can! (When you restore a database from a previous version of SQL, the restore process automatically upgrades the database for you.) During this type of upgrade, we temporarily lose some of our HA capabilities, as one cluster or the other will be a single node cluster until the entire process is complete.

I tried to document the entire process, so here we go…

  1. Since the instance that I am moving is named SQL-PRIVATE\PRIVATE, I simply named the new cluster SQL-PRIVATEX\PRIVATE since both will be renamed at the end. My plan is to move all of the databases to the new instance, rename the old instance to SQL-PRIVATEZ\PRIVATE, then rename the new instance back to the original name SQL-PRIVATE\PRIVATE. Doing it this way, no clients needed to know about the change as the connection strings never changed (DNS just takes a few minutes to catch up on the new IP – see commands below to help speed this up).
  2. Since SQL Server 2014 can now handle CSV volumes for the database engine but not AS, I made a rule that I would use shared disks for any instance that had AS installed, and CSVs for all the others. When Microsoft finally decides to support AS on CSVs, I will move the files. We are, however, using a local SSD drive for our TempDB (a new feature starting in SQL Server 2012). Be aware that the current version of Microsoft DPM DOES NOT support backing up databases that live on CSV volumes. As we are currently migrating from DPM to DataDomain and Networker for our backups, I had to back up to Azure using the new Backup to URL feature in SQL Server 2014 until our new backup solution is in place.
  3. I made sure that the new instance on the new cluster was configured exactly the way I wanted it to be – very similar, if not identical, to the old instance on the old cluster. This included things such as the following (I have created scripts for most of these steps):
    1. Making sure that all my users from the old instance have been moved over to the new instance; follow this KB article to help you out: http://support.microsoft.com/kb/918992 (this is a bit of an art, as you need to modify the file before you execute it – you may have old users, references to default databases that don't exist yet, etc.)
    2. Made sure all my instance level options were set such as fill factor, compression and paths
    3. Setup all my agent operators
    4. Configured my database mail settings
    5. Set my custom Agent properties using the above operators and database mail settings, to name a few.
    6. Recreated all the linked servers that existed on the old instance
    7. Be aware of any reporting server or analysis server configurations.
    8. Took a look at all the jobs that would need to be recreated on the new instance after the databases were moved – including any backup jobs.
    9. In my example, this instance did not have AS or SSRS installed to worry about.
  4. I created a script to help me generate the backup scripts and restore scripts from the old instance on the old cluster. It worked well enough for me; I just needed to update the path when restoring to the new instance, but it got me close enough – especially for instances with a lot of databases on them, such as SharePoint sites. Here is how it worked:
    1. Run one script to generate the backup folders and the backup commands
    2. Execute the backup commands to backup the databases on the old instance
    3. Copy the backup files to the new instance
    4. Run the second script to generate the restore commands on the old instance
    5. Copy the resulting commands to the new instance and fix the restore paths as they will be slightly different.
    6. Execute the restore commands on the new instance

      (please keep in mind that the databases in this example were simple (not small) databases, with ONE data file and ONE log file – you will have to change the script or the restore commands' 'WITH MOVE' options if you have more than that)

  5. I practiced everything above on the new instance before getting to the point where I tested renaming the clustered instances. In my case, I had a full clustered test instance to test the entire process with before running through it on a production instance.
  6. Here is how my night went on the final run:
    1. Stopped all applications and web sites that were talking with any of the databases. You might need to put your old instance in single user mode to stop applications from talking to it. Check Activity Monitor to make sure all conversations have stopped.
    2. Stopped all Agent and backup jobs, including things like DPM or other third party backups programs.
    3. Backed up the databases using the scripts mentioned above
    4. Took all of the databases in the old instance offline to make sure any services or applications were not connecting to them (I would rather have failed connections than be modifying data in 'old' databases).
    5. Copied the database backups to the new server
    6. Restored the databases using the scripts mentioned above
    7. Updated the db owner and compatibility modes (see the sketch after this list)
    8. Set the recovery model to simple, then shrank the database and log files
    9. Renamed the old instance to SQL-PRIVATEZ\PRIVATE using this KB article: http://msdn.microsoft.com/en-us/library/ms178083.aspx, making sure to run these commands on the server that I performed the rename on:
      1. ipconfig /flushdns
      2. ipconfig /registerdns
      3. nbtstat –RR
    10. Waited about 5 minutes for the change to propagate in DNS
    11. Renamed the new instance to SQL-PRIVATE\PRIVATE and ran these commands on the new instance server:
      1. ipconfig /flushdns
      2. ipconfig /registerdns
      3. nbtstat –RR
    12. Waited about 5 minutes for the change to propagate in DNS
    13. Checked my routes on the new server and confirmed that other servers could see the new IP address
    14. Tested one of my applications from a remote server
    15. Set the databases back into full recovery mode, reset/re-enabled my backups and performed a full backup to kick off the set
    16. Re-enabled (after recreating) my Agent jobs
    17. Tested my Linked servers (after creating them)
    18. Enabled all the applications, load balancers and websites
    19. Monitored the new instance!
  7. A couple of things you should think about and consider, as most folks have many instances and have to repeat the same steps many times:
    1. Create generic scripts to do your work, ones that are not instance specific
    2. Make sure to upgrade all of the databases to the SQL Server 2014 compatibility level and set SA as the owner rather than the domain account that you were logged in with (or some other owner – remember, if your database is owned by a domain user and that domain user goes away, you will have problems).
    3. Make sure to use the same port if a static port was used on the old instance
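For steps 6.4, 6.7 and 7.2 above, here is a minimal T-SQL sketch of the commands involved (the database name is hypothetical):

-- step 6.4: take the old copy offline so nothing keeps writing to it
ALTER DATABASE [MyAppDb] SET OFFLINE WITH ROLLBACK IMMEDIATE

-- steps 6.7 and 7.2: on the new instance, after the restore
ALTER AUTHORIZATION ON DATABASE::[MyAppDb] TO sa
ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 120   -- SQL Server 2014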

Good luck – it really did turn out to be an easy process after the research was done. Note: renaming the old cluster did generate an error (Windows 2008 R2), but everything worked just fine after hitting Cancel on the error.

Good luck!

-Eric Niemiec

Creates the backup folders and the script to execute to create the backups:

(I am sure there are better ways to write some of the components of these scripts)

USE MASTER
GO

-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO

-- To update the currently configured value for advanced options.
RECONFIGURE
GO

-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO

-- To update the currently configured value for this feature.
RECONFIGURE
GO

EXEC sp_MSforeachdb @command1='
-- set the database to the current database
USE [*]
IF (DB_NAME() NOT IN (''master'', ''model'', ''tempdb'', ''msdb''))
BEGIN
    DECLARE @logLogicalFileName VARCHAR(100)
    DECLARE @logFilePathName VARCHAR(1000)
    DECLARE @dataLogicalFileName VARCHAR(100)
    DECLARE @dataFilePathName VARCHAR(1000)
    DECLARE @tmpMD VARCHAR(1000)
    DECLARE @databaseName VARCHAR(500) = DB_NAME()

    -- derive the backup folder from the instance default data path
    DECLARE @defaultBackupDirectory VARCHAR(1000) = REPLACE(CONVERT(VARCHAR(1000), SERVERPROPERTY(''INSTANCEDEFAULTDATAPATH'')), ''\DATA\'', ''\Backup\'')

    -- for a simple database, FileId 1 is the data file and FileId 2 is the log file
    SELECT @dataLogicalFileName = b.name, @dataFilePathName = b.filename FROM master.dbo.sysdatabases a INNER JOIN master.dbo.sysaltfiles b ON a.dbid = b.dbid WHERE a.dbid = DB_ID() AND FileId = 1
    SELECT @logLogicalFileName = b.name, @logFilePathName = b.filename FROM master.dbo.sysdatabases a INNER JOIN master.dbo.sysaltfiles b ON a.dbid = b.dbid WHERE a.dbid = DB_ID() AND FileId = 2

    -- create the TempTransfer folder and a per-database subfolder
    SET @tmpMD = ''MD '' + @defaultBackupDirectory + ''TempTransfer\''
    EXEC master.dbo.xp_cmdshell @tmpMD
    SET @tmpMD = ''MD '' + @defaultBackupDirectory + ''TempTransfer\'' + REPLACE(@databaseName, '' '', ''_'') + ''\''
    EXEC master.dbo.xp_cmdshell @tmpMD

    -- build the dated backup file name
    SET @defaultBackupDirectory = @defaultBackupDirectory + ''TempTransfer\'' + REPLACE(@databaseName, '' '', ''_'') + ''\'' + REPLACE(@databaseName, '' '', ''_'') + ''_'' + CONVERT(VARCHAR(2), DATEPART(MONTH, GETDATE())) + ''_'' + CONVERT(VARCHAR(2), DATEPART(DAY, GETDATE())) + ''_'' + CONVERT(VARCHAR(4), DATEPART(YEAR, GETDATE())) + ''.bak''

    PRINT ''BACKUP DATABASE ['' + @databaseName + ''] TO DISK = N'''''' + @defaultBackupDirectory + '''''' WITH COPY_ONLY, NOFORMAT, INIT, COMPRESSION, STATS = 5, NAME = N'''''' + @databaseName + ''''''''
END
', @replacechar='*'
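Running the script prints one command per database, shaped like this (the database name, path and date here are hypothetical) – copy the printed commands into a new query window and execute them there:

BACKUP DATABASE [MyAppDb] TO DISK = N'T:\MSSQL\Backup\TempTransfer\MyAppDb\MyAppDb_11_25_2014.bak' WITH COPY_ONLY, NOFORMAT, INIT, COMPRESSION, STATS = 5, NAME = N'MyAppDb'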

Creates the scripts to restore the databases on the new server (you will need to fix the beginning of the paths):

USE MASTER
GO

-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO

-- To update the currently configured value for advanced options.
RECONFIGURE
GO

-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO

-- To update the currently configured value for this feature.
RECONFIGURE
GO

EXEC sp_MSforeachdb @command1='
-- set the database to the current database
USE [*]
IF (DB_NAME() NOT IN (''master'', ''model'', ''tempdb'', ''msdb''))
BEGIN
    DECLARE @logLogicalFileName VARCHAR(100)
    DECLARE @logFilePathName VARCHAR(1000)
    DECLARE @dataLogicalFileName VARCHAR(100)
    DECLARE @dataFilePathName VARCHAR(1000)
    DECLARE @databaseName VARCHAR(500) = DB_NAME()

    -- derive the backup folder from the instance default data path
    DECLARE @defaultBackupDirectory VARCHAR(1000) = REPLACE(CONVERT(VARCHAR(1000), SERVERPROPERTY(''INSTANCEDEFAULTDATAPATH'')), ''\DATA\'', ''\Backup\'')

    -- for a simple database, FileId 1 is the data file and FileId 2 is the log file
    SELECT @dataLogicalFileName = b.name, @dataFilePathName = b.filename FROM master.dbo.sysdatabases a INNER JOIN master.dbo.sysaltfiles b ON a.dbid = b.dbid WHERE a.dbid = DB_ID() AND FileId = 1
    SELECT @logLogicalFileName = b.name, @logFilePathName = b.filename FROM master.dbo.sysdatabases a INNER JOIN master.dbo.sysaltfiles b ON a.dbid = b.dbid WHERE a.dbid = DB_ID() AND FileId = 2

    -- the same dated file name the backup script generated
    SET @defaultBackupDirectory = @defaultBackupDirectory + ''TempTransfer\'' + REPLACE(@databaseName, '' '', ''_'') + ''\'' + REPLACE(@databaseName, '' '', ''_'') + ''_'' + CONVERT(VARCHAR(2), DATEPART(MONTH, GETDATE())) + ''_'' + CONVERT(VARCHAR(2), DATEPART(DAY, GETDATE())) + ''_'' + CONVERT(VARCHAR(4), DATEPART(YEAR, GETDATE())) + ''.bak''

    PRINT ''RESTORE DATABASE ['' + @databaseName + ''] FROM DISK = N'''''' + @defaultBackupDirectory + '''''' WITH FILE = 1, '' +
        ''MOVE N'''''' + @dataLogicalFileName + '''''' TO N'''''' + @dataFilePathName + '''''', '' +
        ''MOVE N'''''' + @logLogicalFileName + '''''' TO N'''''' + @logFilePathName + '''''', '' +
        ''NOUNLOAD, REPLACE, STATS = 5''
END
', @replacechar='*'

Categories: SQL Server

How to Install SQL Server 2008 R2 or SQL Server 2012 Reporting Services on a Failover Cluster

November 21, 2013

As many of you are aware, SQL Reporting Services is not cluster aware – or so they say. If you try adding it to a failover cluster installation, setup does not give you the option because you are dealing with a cluster, but there is a way to do this. To be clear, SQL Server Reporting Services does not participate in Windows Failover Clustering and is therefore considered not cluster aware from that standpoint.

You should do some planning before you install a SQL Server failover cluster node. For example, Analysis Services can't be added after the first node has been installed without uninstalling the entire thing. But Reporting Services can be added later, since it is not technically cluster aware.

My recommendation when installing a new SQL Server failover cluster node is to include Analysis Services and Native Reporting Services – you can always disable them later if you choose not to use them.

Let's focus on Reporting Services for a minute. When installing your first node, you should include Native Reporting Services. If you don't, and want to go back and add it later, use the following setup switch to add it to any node in the cluster:

:\Setup.exe /SkipRules=StandaloneInstall_HasClusteredOrPreparedInstanceCheck /Action=Install

Even if you install SQL Server Reporting Services on the first node, when you install a second node, SQL Server Reporting Services will not be installed – you must use the above switch to add it after the node has been added.

There are two fundamental approaches you can take when installing SQL Server Reporting Services: install it on each node of the cluster that the instance is installed on, OR install it on servers that are not running SQL Server at all. Let's go through some of the pros and cons of each.

If you have a small implementation and simply have a cluster set up for redundancy purposes, and server load is not a concern, installing SQL Server Reporting Services on each of the clustered nodes is not a bad thing – as long as it is installed on all nodes that the clustered instance can be active on, and they are all joined in a scale-out deployment. This allows you to use the SQL cluster network name to access the Reporting Services web service and Report Manager. It acts as a poor man's load balancer, but the catch is that it brings you to the Reporting Services instance that lives on the active node and doesn't really load balance at all – it's a convenience thing (no need to set up NLB or use something like a KEMP load balancer).

On the other hand, in a heavy use environment where load on the production SQL Servers is an issue, you should install SQL Server Reporting Services on dedicated servers, point them to a database on the clustered instance and join them in a scale-out deployment. SQL Server Reporting Services does not have to be installed on a database server; it just needs a database to store its configuration in. While SQL Server Reporting Services appears to be instance aware, it's just a website that you configure with a URL that typically happens to have the instance name in it (though it does not have to). Once you have the servers installed and joined in a scale-out deployment, you can throw NLB or load balancers over the top of them to perform REAL load balancing.
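Joining servers into the scale-out deployment is done by sharing the symmetric key, either from Reporting Services Configuration Manager or with the rskeymgmt tool. A sketch with hypothetical server and instance names – I believe -j joins and -m/-n name the remote machine and instance, but verify the exact switches with rskeymgmt -? on your version:

rskeymgmt -j -m SSRS-NODE2 -n MSSQLSERVER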

There are always multiple ways to skin a cat; consider all of your options before taking any one approach to solving your reporting needs. Also consider off-loading all of your reporting to a replica server in SQL Server 2012 (there are several options you can consider in both 2008 AND 2012).

Hope this helps!

-Eric Niemiec

Categories: .Net

CRM Dynamics 2013 SSRS Data Connector on Clustered SQL Server 2012 – Reports will not be published…

November 19, 2013

You would think this would work by default, or at least that Microsoft support would figure it out fairly quickly – but not so fast…

This is most likely the first indication that you have a problem, when the reports do not install:

“Reports will not be published for some organizations because the Report Server instances used by these organizations differ from the instance selected during Microsoft Dynamics CRM Reporting Extensions Setup”.

If you do not see the message that tells you which organizations it will install the reports for, don't bother going any further, as the reports will not be installed.

Even using this tool will not get you out of this trouble here! It just fails with no error message.

I have a 2 node cluster of SQL Server 2012 SP1 that I wanted to host the CRM databases on. Installing the base CRM software and databases into the cluster worked just fine, but when it came to installing the reports into the reporting server, that was a challenge. At first I thought it was because my database server name (a clustered network name in this case) was fully qualified in one place and not in another, so I fixed that – no dice.

Since I have two nodes in my cluster, I installed SQL Reporting Services on both nodes and joined them in a scale-out deployment. You should use a load balancing technology like NLB or KEMP LoadMasters, but during your testing phase you can cheat and hijack the clustered network name of the SQL Server instance – doing this provides no load balancing; the report server more-or-less follows whichever server the instance is active on.

When installing CRM Dynamics, I used the clustered network name for the SQL Reporting Server URL, which was where I went wrong. To get the reports to install into SQL Reporting Services, you need to use the name of the server that the instance is active on, install the reports, and then update the organization to use the clustered network name (or some other load balanced name/technology) after the reports are successfully installed.

The problem is easy to resolve, just follow these instructions:

  1. On the active node, make sure that the failed SSRS Data Connector is installed (with no reports installed as a result) – don't think you can skip this step!
  2. In the _MSCRM configuration database, in the Organization table, set the 'AreReportsPublished' flag to False (see the sketch after this list)
  3. Disable the Organization in the Deployment Manager on the web server that you installed CRM Dynamics on
  4. Select Edit Organization in Deployment Manager
  5. Update the SQL Server Reporting Services URL with the name of the active node: http://<active_node_servername>/ReportServer_<instancename>
  6. Save your changes and enable the organization
  7. Uninstall the SSRS Data Connector from the active node
  8. Re-install the SSRS Data Connector on the database server and it will tell you that the reports will be installed!!
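Step 2 boils down to a one-row update. A sketch, assuming the deployment configuration database is the standard MSCRM_CONFIG and using a hypothetical organization name in the WHERE clause – adjust both to your deployment:

USE MSCRM_CONFIG
UPDATE dbo.Organization
SET AreReportsPublished = 0   -- False, so setup will try to publish the reports again
WHERE UniqueName = 'MyOrganization'   -- hypothetical organization name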

If the above steps don't work, make sure you set the 'AreReportsPublished' flag back to False, or it will not attempt to install the reports again – it thinks it already did, since the failed install completed.

After you see that the reports are properly installed, repeat steps 3 – 6 above and set the URL back to the clustered URL and test the CRM Dynamics for reports – you should see them and be back in business!

http://<network_clustered_name>/ReportServer_<instancename>

Organization with active node’s URL:

Organization with clustered network name URL (which is where I wanted to end up):

Checking for reports in Native Report Manager:

Checking for Reports in CRM Dynamics 2013 Available Reports:

Happy Reporting!

-Eric Niemiec

Categories: Dynamics CRM

Microsoft CRM Dynamics 2013 – Setup Error: \SQLAccessGroup not found

November 13, 2013

I have run into this issue several times now – the first time when installing the database in the parent domain and the web server in the child domain, but I have also run into it when installing the database and web servers in the same domain. The solution is simple; just follow the steps below. If you are installing cross domain, you will run into other issues, as it is unsupported. You can use this registry key during setup to skip the checks so that you can continue the way you want; the key gets deleted after every run of setup: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\IgnoreChecks (DWORD) = 1.

  1. When you get the error during setup, open your domain controller and locate the domain groups that it just finished creating.

  2. You will see that they are all Domain Local groups; you need to change this to Universal!
  3. Open each group, change the scope to Universal, press Apply, then change it to Global and press OK (a scripted alternative follows this list).

  4. Then all of your groups should look like this [Universal, not Global]:

  5. [CORRECTION – THEY MUST BE UNIVERSAL GROUPS – I ran into an issue later in the process that required these to be Universal rather than Global]
  6. Return to your CRM Dynamics setup and press the Retry button for it to continue on its merry way.
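If you would rather script the scope change, dsmod can do it from the command line (the distinguished name here is hypothetical – CRM creates the groups with a GUID suffix, so look up the real names first):

dsmod group "CN=SQLAccessGroup {guid},CN=Users,DC=mydomain,DC=local" -scope u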

Categories: Dynamics CRM, SQL Server

SQL Server XML – FOR XML PATH not returning empty XML for NULLS, OMITTING NULLS

February 8, 2013

I have a simple query that joins an Appointments table to a Calendars table (INNER JOIN). Then I have a situation where, if the Appointment item is associated with a Lead, it is joined to the Leads table to grab a few other pieces of information; if it is not, it is considered a note (LEFT OUTER JOIN). I assumed (silly me) that the XML would render the same way a table would render and include the NULL columns/values, just leaving them blank. Not the case – the same query produced two different XML nodes in the result set containing two different sets of fields.

Here is the original query, which returns two different sets of fields in the same result set when one row has a corresponding lead and one doesn't.
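The query screenshot from the original post is not available here; as a sketch, its shape was along these lines (the join targets come from the description above, the column names are hypothetical):

SELECT a.AppointmentId,
       a.Subject,
       c.CalendarName,
       l.LeadId,       -- only appears in the XML when the join matches
       l.CompanyName   -- only appears in the XML when the join matches
FROM Appointments a
INNER JOIN Calendars c ON c.CalendarId = a.CalendarId
LEFT OUTER JOIN Leads l ON l.LeadId = a.LeadId
FOR XML PATH('Appointment'), ROOT('Appointments')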

Below is the resulting XML from the same query. The columns marked in yellow only exist when the LEFT OUTER JOIN matches; when it doesn't, the XML tag is omitted completely in the second row.

If you look at the SQL Server documentation, there are several ways of getting this to work – switching to FOR XML EXPLICIT is one of them, among many. I found an easy and acceptable solution by formatting the select list a bit and was able to solve my problem. The only downside to this is that the string fields were returned empty as expected, but integer fields came back with a zero, not empty – a problem I decided I could live with. Some of the other solutions I found online required a lot more work and knowledge of the XML 'ELEMENTS XSINIL' directive. Again, it doesn't look very hard to use that method, but when you have vendors pulling this data and expecting the XML to look one way, and then you throw in things like this: <FieldName xsi:nil="true" />, it tends to throw them off a bit.

Here is the updated query, modifying only the select portion of the statement and using a CASE statement on those fields involved in the LEFT OUTER JOIN that could come back as null.
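Again as a sketch with the same hypothetical column names, the fix is to run every field from the LEFT OUTER JOIN side through a CASE so it can never be NULL:

SELECT a.AppointmentId,
       a.Subject,
       c.CalendarName,
       CASE WHEN l.LeadId IS NULL THEN 0 ELSE l.LeadId END AS LeadId,                  -- integers fall back to zero
       CASE WHEN l.CompanyName IS NULL THEN '' ELSE l.CompanyName END AS CompanyName   -- strings fall back to empty
FROM Appointments a
INNER JOIN Calendars c ON c.CalendarId = a.CalendarId
LEFT OUTER JOIN Leads l ON l.LeadId = a.LeadId
FOR XML PATH('Appointment'), ROOT('Appointments')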

And here are the results from the above query, with the failed LEFT OUTER JOIN fields highlighted in yellow.

Another day, another thing learned.

Categories: SQL Server, T-SQL