With the release of SQL Server 2016, folks like me get very excited to get their hands on the new software. However, in the past it has always been quite expensive, or very time-limited, for an individual to get hold of a full Developer license.
Not so anymore: as part of the Visual Studio Dev Essentials program, a full developer license for SQL Server 2016 SP1 is now available for free. This includes all the tools, such as SSMS and SSDT, and a host of other products too, including Microsoft R Server; even a few quid's worth of Azure credit is thrown in.
In order to access the downloads you'll need to sign up for the program first (free), and then download away:
https://www.visualstudio.com/dev-essentials/
Also note that a new product such as SQL 2016 requires new sample databases - say goodbye to AdventureWorks and hello to the brand new Wide World Importers! Download all the latest samples from github here:
https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers
Interestingly, however, there does not appear to be an SSAS solution built on the new samples. I guess very little has changed in SSAS in this release, so MS felt they need not bother. A bit irritating in my opinion, as it would be nice to have a consistent set of samples and not have to switch back to old solutions to demo SSAS - it gives the impression that SSAS is being less and less encouraged/supported. But maybe that is deliberate....
Happy downloading, and Happy New Year!
Monday, 2 January 2017
Tuesday, 27 January 2015
Manually Execute SSRS 2012 Subscriptions
By far the easiest place to set up a SQL Server 2012 SSRS subscription for a report is through the Reporting Services Report Manager. A simple GUI guides users through the process of selecting a subscription type, where to save or send the output, and on what schedule to execute. For example, you can specify the report to execute every morning at 9am and email the report as a PDF to a set of users, or save the output file to a fileshare.
Now suppose you don't want this report subscription to email or save the file on an automated schedule, but rather to be manually/actively triggered. Perhaps you want to test that the subscription is working, or you want it to be triggered by a third-party application. There's more than one way of doing this, and here I will cover two methods:
Method 1 – SQL Agent Job
When a subscription is created in the SSRS Report Manager, a corresponding SQL Agent job is also created on the SQL Server. Unfortunately there is no way to control the name of the job and, once created, it cannot be modified without affecting the subscription. The name of the job will be a GUID similar to the highlighted item below:
Right-click on this job and select "Start Job at step…" to run it. This will cause the subscription to execute, and you will see in the SQL Server Reporting Services Report Manager that the Last Run field for the subscription has been updated. This is the simplest approach and is particularly useful for testing the item:
Alternatively, a job request can be made using SQL code (note that sp_start_job lives in the msdb database):
EXEC msdb.dbo.sp_start_job N'AAB7E0BD-8089-4330-A3D6-8B95ACD90132'
This will have exactly the same effect as right-clicking to execute the job.
The drawback to this approach is that SQL Agent job execution requests for a single job do not queue or run in parallel. If the job is already running then a subsequent job execution request will fail:
Request to run job AAB7E0BD-8089-4330-A3D6-8B95ACD90132 refused because the job already has a pending request
Therefore if multiple users are going to be issuing requests then this may not be the best approach.
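If that is a concern, one option is to check whether the job is already running before issuing the request. A minimal sketch (the job name GUID is illustrative; note that msdb.dbo.sysjobactivity can also contain rows from old Agent sessions, so treat this as a starting point rather than a bulletproof check):

```sql
-- Only request a run if the subscription's SQL Agent job
-- is not currently executing (job name GUID is illustrative)
DECLARE @job_name SYSNAME = N'AAB7E0BD-8089-4330-A3D6-8B95ACD90132';

IF NOT EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity ja
    JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
    WHERE j.name = @job_name
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL   -- started but not yet finished
)
    EXEC msdb.dbo.sp_start_job @job_name;
ELSE
    PRINT 'Job is already running - request skipped.';
```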
Method 2 – Stored Procedure
If you open the SQL Agent job used for the subscription and examine the step that has been defined, you will see that it is executing a stored procedure:
The AddEvent stored procedure is being called:
EXEC [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='a7fd8d5f-0af9-49ec-ad8e-b273d44f1bb0'
This stored procedure inserts a record into the ReportServer.dbo.Events table. The Events table is polled every few seconds by the server, and any entries in this table will trigger the subscription to be executed. Note that the @EventData value is the subscription's ID, which is not necessarily the same GUID as the SQL Agent job name (the job is named after the schedule ID).
This same stored procedure can in fact be run manually against the ReportServer database directly and will have exactly the same effect - it will trigger the subscription to execute. However, as we are no longer using the SQL Agent to call the procedure, we can run this code multiple times as quickly as we like. Each time it will insert a record into the Events table, and every entry will trigger a new subscription execution, even if it is the same subscription. Effectively, subscription requests will now be queued and run sequentially.
If a subscription execution request needs to be made actively, or occur multiple times in a short space of time, and each request needs to be completed, then this method is a good approach to handling the scenario.
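As an aside, if you don't know the GUID for a given subscription, it can be looked up from the ReportServer catalog tables. A hedged sketch (the report path is illustrative; table names are as found in a default ReportServer database):

```sql
-- Find the @EventData value (SubscriptionID) for a report's subscriptions,
-- along with the schedule ID used as the SQL Agent job name
SELECT c.Path          AS ReportPath,
       s.SubscriptionID,            -- pass this as @EventData
       rs.ScheduleID                -- the SQL Agent job name
FROM ReportServer.dbo.Subscriptions s
JOIN ReportServer.dbo.[Catalog] c       ON c.ItemID = s.Report_OID
JOIN ReportServer.dbo.ReportSchedule rs ON rs.SubscriptionID = s.SubscriptionID
WHERE c.Path = '/MyFolder/MyReport';    -- illustrative report path
```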
Labels:
2008 R2,
AddEvent,
data-driven,
example,
execution,
job,
manual,
schedule,
SQL,
SQL 2012,
SQL Agent,
SQL Server,
SSMS,
SSRS,
stored procedure,
subscription
Wednesday, 5 March 2014
The database principal owns a schema in the database, and cannot be dropped (Error: 15138)
On a recent audit of a database I noticed that there were a lot of logins for a database that were not required. I set about removing the logins one by one, but soon hit the following error:
Error: 15138 The database principal owns a schema in the database, and cannot be dropped.
Well this threw me a bit; I could not figure out why this one particular user could not be removed. As the message indicates, the account owns a schema on the DB. So I opened the user properties in SSMS and, sure enough, the user in question owned three schemas: db_datareader, db_datawriter and db_owner. However, SSMS will not simply allow you to untick the checkboxes here, so it is still not so easy to remove this link and allow the user to be deleted!
The schemas owned by the user can also be displayed by running the following query, within the context of the database under scrutiny:
SELECT s.name FROM sys.schemas s
WHERE s.principal_id = USER_ID('UserInQuestion');
For a more complete picture of all schemas and users associated with the database:
SELECT *, USER_NAME(principal_id) as username
FROM sys.schemas;
In order to remove this user we have to first transfer ownership of these schemas to a different user - typically dbo. This has to be done once for each schema owned, using the ALTER AUTHORIZATION ON SCHEMA command:
ALTER AUTHORIZATION ON SCHEMA::db_datareader TO dbo
ALTER AUTHORIZATION ON SCHEMA::db_datawriter TO dbo
ALTER AUTHORIZATION ON SCHEMA::db_owner TO dbo
(note the double colon!)
Re-run the previous scripts to ensure that the user in question no longer owns any schemas. Now if we try the delete again, the user will be removed successfully.
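For databases where a user owns many schemas, typing out each statement gets tedious. The following sketch generates the ALTER AUTHORIZATION statements from sys.schemas ('UserInQuestion' is illustrative) - review the generated output before running it:

```sql
-- Generate one ALTER AUTHORIZATION statement per schema
-- owned by the user to be dropped (user name is illustrative)
SELECT 'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name) + ' TO dbo;'
FROM sys.schemas s
WHERE s.principal_id = USER_ID('UserInQuestion');
```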
Labels:
15138,
2008 R2,
alter,
authorisation,
database,
db_owner,
drop,
ERROR,
login,
principal,
schema,
SQL,
SQL 2012,
SSMS,
sys.schemas,
user_id,
user_name
Friday, 27 September 2013
SSAS 2008: Clear Cache Command
When testing query performance on an SSAS database it's important to know whether you are starting from a warm or a cleared cache. The simplest way to clear the cache is through an XMLA command, run from SSMS:
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Object>
<DatabaseID>MyDatabase</DatabaseID>
</Object>
</ClearCache>
This clears the cache for the entire database. You can also clear the cache just for a specific cube:
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Object>
<DatabaseID>MyDatabase</DatabaseID>
<CubeID>MyCube</CubeID>
</Object>
</ClearCache>
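One caveat: ClearCache also empties the calculation (MDX script) cache, so the first query afterwards pays the cost of evaluating the cube's MDX script as well as reading cold data. If you want to time queries against a cold data cache but a loaded calculation script, a commonly used trick is to run an empty query first (cube name is illustrative):

```mdx
-- Warms the calculation script without caching any cell data
SELECT {} ON 0
FROM [MyCube]
```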
Labels:
2008 R2,
Analysis Services,
Cache,
Clear Cache,
Cube,
database,
performance,
query,
SQL 2012,
SSAS,
SSMS,
XMLA
Thursday, 8 August 2013
SSAS - ProcessAdd explained
Further to my previous posts on creating multiple partitions and multiple processing commands in a single XMLA script (here and here), I thought I'd share one more idea around SSAS processing tasks.
When looking at the different processing options available in SSAS you tend to get the same list over and over:
ProcessFull,
ProcessDefault,
ProcessData,
ProcessIndexes,
ProcessUpdate,
ProcessIncremental,
ProcessClear,
ProcessAdd
I will not explain here the differences between each of these types as this info is readily available on plenty of other sites (such as http://technet.microsoft.com/en-us/library/ms174774(v=sql.105).aspx).
However, if you try to use each of these processing types from Management Studio, all will go well until you try to process a dimension using ProcessAdd. It may sound like a useful option (faster than ProcessUpdate, less impact than ProcessFull, etc.) but when you try to use it you will find it is not as straightforward as the others. If you right-click the dimension in SSMS and select Process, you might think to just select it in the drop down. But take a close look at that drop down - ProcessAdd is not there!
But the documentation says it is definitely an option. Script a different processing option to XMLA and change the process type to Add and it may work, but if you haven't specified which records to add the results could be misleading. Confusing, right?
In order to use this processing option you have to actually stipulate quite a lot of detail. This can be done with an XMLA script, but the process of building this script is quite tricky, so I shall do my best to explain how to do it here :) (Greg Galloway gives a good overview of this at http://www.artisconsulting.com/blogs/greggalloway/Lists/Posts/Post.aspx?ID=4, and the steps below are based on following his method.)
1 - Use Management Studio (SSMS) to script out a ProcessUpdate command to XMLA. (I recommend editing the error settings to ignore/report duplicates before scripting.)
2 - In BIDS, in the Solution Explorer, right-click on the dsv (the dsv file that the relevant dimension is built from, if you have more than one) and select View Code. Copy the entire code and paste it into a new window in SSMS so that it can be edited.
3 - We should now reduce this dsv copy down to only the relevant part. We only actually need the <xs:element> whose name corresponds to the particular query that is relevant to our dimension. Tip - a quick way to find this element is to search for a unique string within the query that singles it out from the rest! Keep all data and tags within this particular tag and delete everything outside it (tags at the same level), but keep all the parent tags. SSMS makes it easy to expand and collapse the tags in order to determine which tags are above or below, and which ones are at the same level.
4 - Copy what remains and paste it into your original process script from step 1. The code should be copied immediately before the closing Process tag.
5 - Now modify the actual query to limit it to only new data. For example add a where clause that limits the results to be after a certain date.
6 - Voila - you now have your ProcessAdd script. It should look something like the code below, ready to execute.
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Error Config -->
  <ErrorConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <KeyErrorLimit>-1</KeyErrorLimit>
  </ErrorConfiguration>
  <!-- Processing details -->
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
      <Object>
        <DatabaseID>MyDB</DatabaseID>
        <DimensionID>MyDim</DimensionID>
      </Object>
      <Type>ProcessAdd</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
      <!-- Copy from dsv goes here -->
      <DataSourceView xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:dwd="http://schemas.microsoft.com/DataWarehouse/Designer/1.0" dwd:design-time-name="4be9af80-0ae0-4f8a-936d-2103b944155f" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
        <ID>MyDSV</ID>
        <Name>MyDSV</Name>
        <DataSourceID>MyDS</DataSourceID>
        <Schema>
          <xs:schema id="MyDS" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:msprop="urn:schemas-microsoft-com:xml-msprop">
            <xs:element name="MyDS" msdata:IsDataSet="true" msdata:UseCurrentLocale="true" msprop:design-time-name="2557b54c-2a08-4e49-ab45-fdd94dfb6dee">
              <xs:complexType>
                <xs:choice minOccurs="0" maxOccurs="unbounded">
                  <!-- Query to be modified is here -->
                  <xs:element name="MY_DIM" msprop:QueryDefinition="SELECT 'HELLO' AS PHRASE FROM MY_TABLE WHERE DATE_KEY > 20130801" msprop:DbTableName="MY_DIM" msprop:QueryBuilder="GenericQueryBuilder" msprop:IsLogical="True" msprop:FriendlyName="MY_DIM" msprop:design-time-name="c4b7ad24-101e-4d41-bc04-7ac2b605cd7b" msprop:TableType="View">
                    <xs:complexType>
                      <xs:sequence>
                        <xs:element name="MY_ATTRIBUTE" msprop:design-time-name="1e60f905-098a-45f2-b5be-c34364bdf328" msprop:DbColumnName="PHRASE" minOccurs="0">
                          <xs:simpleType>
                            <xs:restriction base="xs:string">
                              <xs:maxLength value="100" />
                            </xs:restriction>
                          </xs:simpleType>
                        </xs:element>
                      </xs:sequence>
                    </xs:complexType>
                  </xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
          </xs:schema>
          <diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" />
        </Schema>
      </DataSourceView>
      <!-- End of copy from dsv -->
    </Process>
  </Parallel>
</Batch>
Labels:
2008 R2,
Analysis Services,
BIDS,
Cube,
element,
Process Add,
Process Cube,
script,
SQL 2012,
SSAS,
SSMS,
XMLA
Monday, 2 April 2012
SQL 2008: SELECT FOR XML
Recently I was tasked with implementing a solution to transfer data between two systems for a Financial Services company. The data had to be output, one entity (account, broker etc.) at a time, from a SQL 2008 R2 database to an XML file, before being input to the third-party application via a message queue.
One of the challenges was that each entity - and there were more than 30 of them - required the XML to be wrapped in different tags and have different headers to identify them.
The solution we chose was to use the FOR XML clause, which has been enhanced in R2. The plan was to write the SELECT FOR XML statement in a stored procedure and BCP it out to a file.
Sounds very simple, but in case you have to do the same thing I thought I'd highlight how we did it.
The SELECT...FOR XML clause will convert the output of your SELECT statement into XML format, and the really useful bit is that with the PATH and ROOT options you can specify exactly what tag should be used to identify a row, and what tag should be used to identify the start and end of the result set. The following example uses a table from the AdventureWorks db:
SELECT TOP 2
ShiftID, Name
FROM HumanResources.Shift
FOR XML PATH('RowName'), ROOT('ResultSetName'), TYPE
Results:
----------------------------------------------------------------------------------------------------------------------------------
<ResultSetName><RowName><ShiftID>1</ShiftID><Name>Day</Name></RowName><RowName><ShiftID>2</ShiftID><Name>Evening</Name></RowName></ResultSetName>
(1 row(s) affected)
And nicely enough, if you click on the result in SSMS, it will display the results all pleasantly formatted:
However, in the actual case being worked on, the result set itself had to be wrapped in further tags to identify the file to the message queue. The solution: nest this SELECT FOR XML statement inside another, enabling you to generate an XML hierarchy:
SELECT
( SELECT TOP 2
ShiftID, Name
FROM HumanResources.Shift
FOR XML PATH('NestedRow'),
ROOT('NestedRoot'), TYPE
) AS TestRow
FOR XML PATH ('RowName'),
ROOT ('ResultSetName'), TYPE
Note that if you only want one of the two tags (ROOT or PATH) then you can put in an empty string. Not putting in anything at all will result in default behaviour - check out the msdn reference for more on that.
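As an illustration of the empty-string behaviour, PATH('') drops the per-row wrapper entirely, so each column becomes a direct child of the root. A hedged sketch against the same AdventureWorks table:

```sql
-- No row element: ShiftID and Name repeat directly under ResultSetName
SELECT TOP 2 ShiftID, Name
FROM HumanResources.Shift
FOR XML PATH(''), ROOT('ResultSetName'), TYPE
```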
So this code essentially is the answer to the problem. Put the above code into a stored procedure and call it from bcp, using the queryout option to output the results to file:
bcp "EXEC GetXml" queryout MyFile.xml -T -c
Alternatively, put the actual SQL SELECT statement into the BCP command, but that looks rather messy and splits your logic into two places - the database and the batch file - which means more maintenance. I would advise keeping it in the stored proc.
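In practice you will usually also need to point bcp at the right server and fully qualify the procedure, since queryout runs in the login's default database context. A hedged sketch (server, database and procedure names are illustrative):

```shell
# -T trusted connection, -c character mode, -S target server/instance
bcp "EXEC MyDB.dbo.GetXml" queryout MyFile.xml -T -c -S MyServer
```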
This solves the main part of the problem, but as we have 30+ entities requiring 30+ different wrapper tags, we would now need 30+ stored procs to be coded. To avoid so much repeated code I opted to use dynamic SQL and a simple lookup table for the wrapper tags. The procedure was modified to require a parameter identifying which entity it is being executed for. This entity name is looked up in the table, and the associated tags are then used to populate the dynamic SQL and SELECT FOR XML statements.
CREATE TABLE [dbo].[lookup](
	[entity] [varchar](50) NOT NULL,
	[parameter_1] [varchar](50) NULL,
	[parameter_2] [varchar](50) NULL,
	[parameter_3] [varchar](50) NULL,
	[parameter_4] [varchar](50) NULL,
	[parameter_5] [varchar](50) NULL,
	PRIMARY KEY CLUSTERED ([entity] ASC)
)
GO
INSERT [dbo].[lookup] ([entity], [parameter_1], [parameter_2], [parameter_3], [parameter_4], [parameter_5])
VALUES (N'account', N'Account', N'AccountList', N'', N'AccountDownload', NULL)
GO
CREATE PROC [dbo].[get_xml] @entity VARCHAR(50)
AS
DECLARE
	@SQL VARCHAR(MAX),
	@MessageID VARCHAR(50),
	@PARAM1 VARCHAR(50),
	@PARAM2 VARCHAR(50),
	@PARAM3 VARCHAR(50),
	@PARAM4 VARCHAR(50)

-- @MessageID must be populated before it is concatenated into the dynamic SQL;
-- a quoted NEWID() is used here purely as an illustration
SET @MessageID = '''' + CAST(NEWID() AS VARCHAR(36)) + ''''

SELECT @PARAM1 = parameter_1,
	@PARAM2 = parameter_2,
	@PARAM3 = parameter_3,
	@PARAM4 = parameter_4
FROM dbo.lookup
WHERE entity = @entity

SET @SQL = '
SELECT '+@MessageID+' AS MessageId,
(
	SELECT *
	FROM [MyDB].[dbo].[tbl_'+@entity+'] v
	FOR XML PATH('''+@PARAM1+'''), ROOT('''+@PARAM2+'''), TYPE
)
FOR XML PATH('''+@PARAM3+'''), ROOT('''+@PARAM4+'''), TYPE
'
EXEC (@SQL)
GO
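With the lookup row in place, producing the wrapped XML for any entity is then a single call (using the illustrative 'account' row inserted above); the same call slots straight into the bcp command shown earlier:

```sql
-- Generate the fully wrapped XML for the account entity
EXEC dbo.get_xml @entity = 'account'
```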
Now we have one stored proc that can be used to build all the XML files required, and all our logic is in a single place. The procedure accepts an input parameter that indicates what entity is being loaded, and the PATH and ROOT values are then found from the lookup table. Notice also the addition of the MessageID as a field of the outer SELECT FOR XML statement. Playing with nesting one statement inside another in this manner allows a huge amount of flexibility when transforming data into xml format. For more info on the SELECT FOR XML clause, including use of other modes, see the msdn reference here:
http://msdn.microsoft.com/en-us/library/ms178107.aspx