How to Install SQL Server 2008 R2 or SQL Server 2012 Reporting Services on a Failover Cluster

November 21, 2013

As many of you are aware, SQL Server Reporting Services is not cluster aware – or so they say. If you try adding it to a failover cluster installation, setup does not give you the option because you are dealing with a cluster – but there is a way to do this. To be clear, SQL Server Reporting Services does not participate in Windows Failover Clustering, so from that standpoint it is considered not cluster aware.

Do some planning before you install a SQL Server failover node. For example, Analysis Services can’t be added after the first node has been installed without uninstalling the entire instance. Reporting Services, however, can be added later, since it is not technically cluster aware.

My recommendation when installing a new SQL Server failover cluster node is to include Analysis Services and native Reporting Services – you can always disable them later if you choose not to use them.

Let’s focus on Reporting Services for a minute. When installing your first node, you should include native Reporting Services. If you don’t and want to go back and add it later, use the following setup switch, which allows you to add it to any node in the cluster:

:\Setup.exe /SkipRules=StandaloneInstall_HasClusteredOrPreparedInstanceCheck /Action=Install

Even if you install SQL Server Reporting Services on the first node, it will not be installed when you add a second node – you must use the above switch to add it after the node has joined the cluster.

There are two fundamental approaches to installing SQL Server Reporting Services: install it on each node of the cluster that the instance runs on, or install it on servers that are not running SQL Server at all. Let’s go through some of the pros and cons of each.

If you have a small implementation with a cluster set up purely for redundancy, and server load is not a concern, installing SQL Server Reporting Services on each of the clustered nodes is not a bad thing – as long as it is installed on every node the clustered instance can be active on and they are all joined in a scale-out deployment. This lets you use the SQL cluster network name to access the Reporting Services web service and Report Manager. It acts as a poor man’s load balancer, but the catch is that it always brings you to the Reporting Services instance living on the active node and doesn’t really load balance at all – it’s a convenience thing (no need to set up NLB or use something like a KEMP load balancer).

On the other hand, in a heavy-use environment where load on the production SQL Servers is an issue, you should install SQL Server Reporting Services on dedicated servers, point them to a database on the clustered instance, and join them in a scale-out deployment. SQL Server Reporting Services does not have to be installed on a database server; it just needs a database to store its configuration in. And while SQL Server Reporting Services appears to be instance aware, it’s really just a website you configure with a URL that typically happens to have the instance name in it (it doesn’t have to). Once you have the servers installed and joined in a scale-out deployment, you can put NLB or hardware load balancers over the top of them to perform REAL load balancing.

There are always multiple ways to skin a cat, so consider all your options before settling on any one approach to your reporting needs. For example, consider off-loading all of your reporting to a replica server in SQL Server 2012 (there are several options available in both 2008 and 2012).

Hope this helps!

-Eric Niemiec

Categories: .Net

CRM Dynamics 2013 SSRS Data Connector on Clustered SQL Server 2012 – Reports will not be published…

November 19, 2013

You would think this would work by default, or at least that Microsoft support would figure it out fairly quickly – but not so fast…

This message is most likely the first indication that you have a problem with the reports not installing:

“Reports will not be published for some organizations because the Report Server instances used by these organizations differ from the instance selected during Microsoft Dynamics CRM Reporting Extensions Setup”.

If you do not see the message that tells you which organization it will install the reports for, don’t bother going any further, as the reports will not be installed.

Even using this tool will not get you out of trouble here – it just fails with no error message.

I have a two-node SQL Server 2012 SP1 cluster that I wanted to host the CRM databases on. Installing the base CRM software and databases into the cluster worked just fine – but when it came to installing the reports into the report server, that was a challenge. At first I thought it was because my database server name (the clustered network name in this case) was fully qualified in one place and not in another, so I fixed that – no dice.

Since I have two nodes in my cluster, I installed SQL Server Reporting Services on both nodes and joined them in a scale-out deployment. You should use a load-balancing technology like NLB or KEMP LoadMasters, but during your testing phase you can cheat and hijack the clustered network name of the SQL Server instance – doing this provides no load balancing; the report server more or less follows whichever server the instance is active on.

When installing CRM Dynamics, I used the clustered network name for the SQL Reporting Server URL, which is where I went wrong. To get the reports to install into SQL Server Reporting Services, you need to use the name of the server the instance is currently active on, install the reports, and then update the organization to use the clustered network name (or some other load-balanced name/technology) after the reports are successfully installed.

The problem is easy to resolve – just follow these instructions:

  1. On the active node, make sure that the failed SSRS Data Connector is installed (with no reports installed as a result) – don’t think you can skip this step!
  2. In the _MSCRM configuration database, in the Organization table, set the ‘AreReportsPublished’ flag to False (a T-SQL sketch follows below)
  3. Disable the organization in Deployment Manager on the web server that you installed CRM Dynamics on
  4. Select Edit Organization in Deployment Manager
  5. Update the SQL Server Reporting Services URL with the name of the active node: http://<active_node_servername>/ReportServer_<instancename>
  6. Save your changes and enable the organization
  7. Uninstall the SSRS Data Connector from the active node
  8. Re-install the SSRS Data Connector on the database server and it will tell you that the reports will be installed!

If the above steps don’t work, make sure you set the ‘AreReportsPublished’ flag back to False, or it will not attempt to install the reports again – it thinks it already did, because the failed install completed.
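Step 2 above is a one-line UPDATE. Here is a minimal sketch, assuming the standard MSCRM_CONFIG configuration database and a FriendlyName lookup column – both are assumptions, so adjust the names for your deployment:

    -- Sketch only: database, table and WHERE-clause values are assumptions.
    -- Reset the flag so Reporting Extensions setup will try to publish again.
    USE MSCRM_CONFIG;
    GO

    UPDATE dbo.Organization
    SET    AreReportsPublished = 0          -- 0 = False
    WHERE  FriendlyName = 'YourOrganization';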

After you see that the reports are properly installed, repeat steps 3–6 above, set the URL back to the clustered URL, and test CRM Dynamics for reports – you should see them and be back in business!

http://<network_clustered_name>/ReportServer_<instancename>

Organization with active node’s URL:

Organization with clustered network name URL (which is where I wanted to end up):

Checking for reports in Native Report Manager:

Checking for Reports in CRM Dynamics 2013 Available Reports:

Happy Reporting!

-Eric Niemiec

Categories: Dynamics CRM

Microsoft CRM Dynamics 2013 – Setup Error: \SQLAccessGroup not found

November 13, 2013

I have run into this issue several times now – the first time when installing the database in the parent domain and the web server in a child domain, but I have also hit it when installing the database and web servers in the same domain. The solution is simple; just follow the steps below. If you are installing cross-domain, you will run into other issues, as that configuration is unsupported. You can use the following registry key during setup to skip the checks so that you can continue the way you want (the key gets deleted every time you run setup): HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\IgnoreChecks (DWORD) = 1.

  1. When you get the error during setup, open your domain controller and locate the domain groups that setup just finished creating.
  2. You will see that they are all Domain Local groups – you need to change them to Universal!
  3. Open each group, change the scope to Universal, and press OK. (They must be Universal, not Global – I ran into an issue later in the process that required Universal groups.)
  4. All of your groups should then show a scope of Universal.
  5. Return to your CRM Dynamics setup and press the Retry button for it to continue on its merry way.

Categories: Dynamics CRM, SQL Server

SQL Server XML – FOR XML PATH not returning empty XML for NULLS, OMITTING NULLS

February 8, 2013

I have a simple query that joins an Appointments table to a Calendars table (INNER JOIN). If the appointment item is associated with a lead, it is also joined to the Leads table to grab a few other pieces of information; if it is not, it is considered a note (LEFT OUTER JOIN). I assumed (silly me) that the XML would render the same way a table would and include the NULL columns/values, just leaving them blank. Not the case – the same query produced two different XML nodes in the result set containing two different sets of fields.

Here is the original query, which returns two different sets of fields in the same result set when one row has a corresponding lead and one doesn’t.
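The query itself was a screen shot in the original post; below is a minimal reconstruction of its shape. The Appointments, Calendars, and Leads table names come from the description above, but every column name is hypothetical:

    -- Hypothetical reconstruction; all column names are illustrative.
    -- The lead columns are selected directly, so when the LEFT OUTER JOIN
    -- finds no match, FOR XML PATH omits their elements entirely.
    SELECT a.AppointmentID,
           a.Subject,
           c.CalendarName,
           l.LeadID,            -- NULL when the appointment is just a note...
           l.CompanyName        -- ...and NULL values produce no XML element
    FROM dbo.Appointments AS a
         INNER JOIN dbo.Calendars AS c ON c.CalendarID = a.CalendarID
         LEFT OUTER JOIN dbo.Leads AS l ON l.LeadID = a.LeadID
    FOR XML PATH('Appointment'), ROOT('Appointments');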

Below is the resulting XML from the same query. The columns marked in yellow only exist when the LEFT OUTER JOIN matches – when it doesn’t, the XML tags are omitted completely in the second row.

If you look at the SQL Server documentation, there are several ways to get this to work – switching to FOR XML EXPLICIT is one among many. I found an easy and acceptable solution by reformatting the select list a bit. The only downside is that the string fields were returned empty as expected, but integer fields came back with a zero, not empty – a problem I decided I could live with. Some of the other solutions I found online required more work and knowledge of the ELEMENTS XSINIL option. It doesn’t look very hard to use, but when you have vendors pulling this data and expecting the XML to look one way, then you throw in things like <FieldName xsi:nil="true" />, it tends to throw them off a bit.
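For comparison, this is roughly what the ELEMENTS XSINIL route looks like (same hypothetical columns as above) – every element is kept, but the NULLs come back carrying the xsi:nil attribute that can throw off consumers expecting a fixed shape:

    -- The rejected alternative: ELEMENTS XSINIL keeps NULL columns in the
    -- output, but renders them as <LeadID xsi:nil="true" /> rather than
    -- as empty elements.
    SELECT a.AppointmentID, a.Subject, c.CalendarName, l.LeadID, l.CompanyName
    FROM dbo.Appointments AS a
         INNER JOIN dbo.Calendars AS c ON c.CalendarID = a.CalendarID
         LEFT OUTER JOIN dbo.Leads AS l ON l.LeadID = a.LeadID
    FOR XML PATH('Appointment'), ROOT('Appointments'), ELEMENTS XSINIL;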

Here is the updated query, modifying only the select portion of the statement and using a CASE statement on the fields involved in the LEFT OUTER JOIN that could come back as NULL.
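Again a sketch with the same hypothetical column names – the idea is simply to wrap each nullable column from the LEFT OUTER JOIN so the select list never hands FOR XML a NULL:

    -- CASE guarantees a non-NULL value, so every row emits every element.
    -- Trade-off from the post: strings come back empty, integers come back 0.
    SELECT a.AppointmentID,
           a.Subject,
           c.CalendarName,
           CASE WHEN l.LeadID IS NULL THEN 0 ELSE l.LeadID END AS LeadID,
           CASE WHEN l.CompanyName IS NULL THEN '' ELSE l.CompanyName END AS CompanyName
    FROM dbo.Appointments AS a
         INNER JOIN dbo.Calendars AS c ON c.CalendarID = a.CalendarID
         LEFT OUTER JOIN dbo.Leads AS l ON l.LeadID = a.LeadID
    FOR XML PATH('Appointment'), ROOT('Appointments');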

And here are the results from the above query, with the fields from the failed LEFT OUTER JOIN highlighted in yellow.

Another day, another thing learned.

Categories: SQL Server, T-SQL

WSS3 to SharePoint 2013 Discussion Board Conversion

February 3, 2013

Ah yes, the summer of 2008 brings back so many memories – one of which was losing part of my summer to a WSS2-to-WSS3 conversion when I had never written a line of SharePoint code in my life. When performing an in-place upgrade from WSS2 to WSS3, the conversion process didn’t properly thread the discussion board posts on the other side, so we had to roll our own – as we usually end up doing – and wrote a conversion utility that extracted all of the discussion board information from the WSS2 site, re-ordered it, and loaded it back into the WSS3 site.

Flash forward almost five years and I find myself in a similar situation: it’s time to upgrade again, this time to SharePoint 2013 Foundation. This time we chose to forgo the conversion process entirely, for several reasons. First, we decided to re-skin the site and had a third party do the work for us. Second, we thought this would be a good time to purge the site and not bring forward the junk that had accumulated over the years. Last, we knew the conversion process had failed in the past, so why bother now?

It was all a great idea and going well until someone said we could not lose the discussion boards in the upgrade process! My heart sank. I looked to third-party vendors to deal with this, but the ones we found were very expensive and the demo did not work worth a damn. Long story short, I just spent about 20 hours of my weekend digging out my old code (much of it was still applicable) and reworking it to solve this problem. The biggest challenge this time around was not ordering the posts (which was different from the last round); it was dealing with the fact that the user who made each post did not exist yet in the new site – and might not even exist in our domain anymore. Coming up with a process to resolve the user name, look it up in our Active Directory, add the user to the site if it exists, and fall back to a dummy domain user for all the posts where the user no longer exists was not fun to figure out, but all possible (worst-written routine of my life).

Below is a screen shot of the utility I wrote to do the dirty work – not all options are on the screen, and there are what I would call a lot of hacks to get it working just right. Since SharePoint 2013 has a 5,000-item-per-list limit, we had to break our discussions down into smaller units and ‘archive’ them – which kind of worked out nicely: the archives will be read-only to our users and the current two years will still be read/write. Behind the scenes, the one board is broken up into smaller ones, and the CAML query to the WSS3 server defines the break-point dates for each list. Attachments are also downloaded from the old server and re-uploaded to the new server every time one is encountered (another annoying requirement of the project, but fairly easy to do).

If you really need the source code for this, let me know – it could save you from buying a third-party application from a leading SharePoint conversion ‘authority’. I would have spent at least $5,000 on a conversion utility that I spent less than 20 hours on – and could resell, I guess. If you are not a C# programmer with SharePoint experience, don’t bother rolling your own or looking at this code – there’s too much to deal with. Pay someone to do it for you, or contact me and you can contract with me to get you converted quickly. eric@techinternals.com

SharePoint is a GREAT thing when it does exactly what you want it to do (unless you’re the admin, then it sucks anyway).

Until next time…

SQL Server 2012 SP1 CU2 Native Backup to Azure Blob Storage

January 30, 2013

There have been plenty of blog posts recently about the new feature in SQL Server 2012 SP1 CU2 that allows you to natively back up from SQL Server to your blob storage in Azure. After seeing the posts, I figured I would take it for a test drive. Starting from the following blog post, I began poking around:

http://sqlblog.com/blogs/sqlos_team/archive/2013/01/24/backup-and-restore-to-cloud-simplified-in-sql-server-2012-sp1-cu2.aspx

This post refers to a few MSDN tutorials, the Azure pricing calculator, and a few downloads for SQL Server 2012 SP1 CU2 to get this feature working. While all these articles explain most of the details, here are a couple of things to keep in mind when looking to use this feature in production – or maybe just in your development environment:

  • While you can use T-SQL to back up to Azure and perform restores from Azure, the native backup and restore screens in SQL Server do not support Azure. I have a T-SQL sketch below this list, and the article mentioned above and the MSDN tutorials also have examples.
  • Determine whether you truly need geo-redundant storage in Azure, as the cost is somewhat higher compared to locally redundant storage. For around $100/mo, you can get 1,475 GB of locally redundant storage compared to only 1,075 GB of geo-redundant storage. I chose to create one storage account of each type, as reflected in my T-SQL and Azure screen shots, for testing purposes.

    You can check out the Azure pricing calculator yourself at:

    http://www.windowsazure.com/en-us/pricing/calculator/?scenario=data-management

  • You MUST turn on SQL Server backup compression (unless you have a third-party compression tool) if you want to keep your Azure costs down. I backed up the same empty database with and without compression just to see what the difference would be; obviously a test with your production databases would give you a more precise comparison. The result was 3.69 MB uncompressed as opposed to 768 KB compressed. You can turn on compression as the instance-level default or on an individual backup. If you choose to push only an occasional backup to Azure as the first step of a new off-site DR plan, you may choose to compress just those backups using the WITH COMPRESSION keywords, depending on the CPU and storage capabilities of your production environment.

  • There is nothing built into the current tool set (SQL Server) that will help you manage your backup history in Azure. In other words, if you don’t write an application or manually manage your backup retention in Azure, your monthly costs will quickly get out of control. It might be a good job for a junior DBA or developer to take care of.
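Here is a minimal sketch of the CU2 syntax mentioned in the first bullet – the storage account, container, credential, and database names are all placeholders:

    -- Placeholder names throughout; the credential stores the storage
    -- account name and access key used by BACKUP TO URL.
    CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',          -- storage account name
         SECRET   = '<storage account access key>';

    -- Optional: make compression the instance-wide default.
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;

    -- COMPRESSION is what keeps the blob (and the monthly bill) small.
    BACKUP DATABASE TestDB
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/TestDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred',
         COMPRESSION, STATS = 10;

    -- Restores use the same credential.
    RESTORE DATABASE TestDB
    FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/TestDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred', REPLACE;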

Backing up to Azure is a nice-to-have feature. It allows development teams or small shops to create off-site backups with ease – especially if the team has an MSDN account and gets some Azure storage for free (ok, included with the MSDN account, as nothing is ever really free). This also applies to Microsoft Partners – some storage is included with your membership at all levels.

This could also be a good stepping stone for shops that already have a good in-house backup strategy in place but no off-site capability: just occasionally push a copy-only backup to Azure as added protection against an internal failure – or to share a database with another office or development team. Make sure you are aware of all the legal requirements pertaining to your backups BEFORE you start pushing them out to Azure (consider HIPAA, SOX, and PCI compliance issues).

I would strongly recommend implementing Transparent Data Encryption (TDE) on your database before pushing backups to Azure (call me cautious) – keeping your certificate and key backups safely in a different place.
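A minimal TDE sketch, with placeholder names – the BACKUP CERTIFICATE step at the end is the part you cannot skip, since without the certificate and private key the Azure copy cannot be restored anywhere else:

    -- Note: TDE is an Enterprise Edition feature. All names are placeholders.
    USE master;
    GO
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate for TestDB';
    GO
    USE TestDB;
    GO
    CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
    GO
    ALTER DATABASE TestDB SET ENCRYPTION ON;
    GO
    -- Keep these files somewhere other than the Azure storage account.
    USE master;
    GO
    BACKUP CERTIFICATE TDECert
    TO FILE = 'C:\SafePlace\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\SafePlace\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = '<another strong password>');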

Obviously this is only a preview of what the SQL and Azure teams can do with Azure blob storage – and I assume SP2 will bring more tools to make the integration more robust.

Categories: Azure, SQL Server, T-SQL

Unable to Install SQL Server 2008 R2 Management Tools – Complete and Unable to Edit SQL Server 2008 R2 Maintenance Plans

January 6, 2013

I ran into an issue after upgrading a SQL Server 2008 R2 SP0 Express installation to SQL Server 2008 R2 SP2 Standard Edition: I could create maintenance plans but was then unable to edit them. I finally figured out that I was missing the ‘Management Tools – Complete’ option in the installation; the only problem was that when I went to add new features to the existing instance, it would not allow me to select that one additional checkbox.

What I found was that if I ran the edition upgrade again and then went to add features to the current installation, it finally allowed me to check that one checkbox, and after the install was complete I was at last able to edit the maintenance plans.

To run the edition upgrade, run setup.exe, select Maintenance, then Edition Upgrade, and proceed through the wizard for the instance you need to fix – it will be quick, as there is really nothing for it to do, and it does not seem to harm anything (I did this to three servers).

Then, without closing the original installer screen, select Installation, then New SQL Server stand-alone installation, then Add features to an existing instance, and you will be able to select ‘Management Tools – Complete’ – no problem!

Categories: SQL Server