Saturday, April 2, 2011

The Good, The Bad and The Ugly of Metadata Management

Introduction
The simplest definition of metadata would be 'data that describes other data'. It adds context, meaning and understanding to the data being described. Database\data warehouse projects often have some form of metadata management associated with them. Metadata management, in simple terms, is the continuous process of maintaining metadata (adding, updating and other maintenance activities, depending on the form of metadata storage). An example of metadata management is the data modeling phase, where entity and attribute definitions are maintained in tools such as Erwin, ER/Studio, etc.
For the scope of this blog I will confine myself to the boundaries of technical metadata discussion (specifically to data dictionary metadata).
Metadata can be broadly classified into three types:
1. Business Metadata: This metadata defines and describes the actual data existing in the database or data warehouse in a business sense. Ex: The column Quantity in table Orders refers to the total number of orders placed by a customer in a day.
2. Technical Metadata: This metadata stores information about the technical aspects of the data for objects like tables, columns, data types, data profiling, ETL objects and so on. This is typically called the data dictionary (or system inventory).
An example of data dictionary metadata would be Microsoft SQL Server's own metadata storage. SQL Server keeps server-level data in a set of system databases – 'master', 'model', 'msdb' and 'tempdb' – and each database stores data about its own objects (tables, indexes, constraints, stored procedures, functions, views, etc.) in a set of internal system tables. Ex: sysobjects, syscolumns, sysconstraints and other tables brilliantly exposed by the INFORMATION_SCHEMA views (a sample query against these views follows this list). Note: These system tables also store data about themselves.
3. Process Execution Metadata: Data about the actual execution of the ETL processes – performance statistics, rows transferred, transfer duration, errors\exceptions encountered, logging and so on (a sketch of such a logging table also follows this list). In a way this is also technical metadata, but I like to branch it into a separate category due to the nature of the data.
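To make the last two categories concrete, here is a minimal example of each. First, since the data dictionary flavor of technical metadata leans on the INFORMATION_SCHEMA views, a simple query (my own illustration, not tied to any particular repository design) that pulls a basic data dictionary out of SQL Server:

SELECT  c.TABLE_SCHEMA,
        c.TABLE_NAME,
        c.COLUMN_NAME,
        c.DATA_TYPE,
        c.CHARACTER_MAXIMUM_LENGTH,
        c.IS_NULLABLE
FROM    INFORMATION_SCHEMA.COLUMNS c
        INNER JOIN INFORMATION_SCHEMA.TABLES t
            ON  t.TABLE_SCHEMA = c.TABLE_SCHEMA
            AND t.TABLE_NAME   = c.TABLE_NAME
WHERE   t.TABLE_TYPE = 'BASE TABLE'      -- leave views out of the dictionary
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;

Second, a sketch of the kind of logging table a process execution (auditing) framework might write to. All names here are my own and purely illustrative:

CREATE TABLE dbo.EtlExecutionLog
(
    ExecutionLogID   INT IDENTITY(1,1) PRIMARY KEY,
    PackageName      NVARCHAR(200)  NOT NULL,
    StartTime        DATETIME       NOT NULL DEFAULT (GETDATE()),
    EndTime          DATETIME       NULL,
    RowsTransferred  INT            NULL,
    ExecutionStatus  VARCHAR(20)    NULL,       -- e.g. Success, Failed
    ErrorDescription NVARCHAR(2000) NULL        -- errors\exceptions encountered
);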
The Why?
The 'why' of this blog is to (try to) understand the impact of the purpose & use of the metadata stored in the metadata repository. I will also discuss the reasons for aligning your repository design with the purpose of the metadata. Just to be clear, I myself am a big supporter of introducing metadata into the overall architecture of a system, and to an extent of metadata-driven architectures. The buck stops at the point of realization that the effort being invested in "making it work" is more than the "actual work" itself.
Now that we have a fairly good understanding of what metadata is and its different forms of storage, let's move on to a case of building a metadata solution for a database migration project.
The (Client’s) Treasure
Let us consider the case of designing a new enterprise data warehouse project along with a metadata solution as an integral part of the effort. (I am relating this to a past project at a major automobile dealership chain, where approximately one thousand DB2 tables needed to be transferred to a SQL Server database – sharing some of that experience here.) Situation: create a daily load from the DB2 tables into the new SQL Server data warehouse. Assuming that our solution has an ETL part (SSIS) and also a reporting and analysis piece (SSAS and SSRS), we come up with a metadata repository (a common SQL Server database) holding a logging and auditing framework for the ETL components (process metadata) and a key-value pair storage structure for column definitions, ETL object definitions, report definitions, etc. (a common structure for the business and technical metadata identified above).
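A minimal sketch of such a key-value pair structure (the table and column names below are my own assumptions for illustration, not the actual repository design from that project):

CREATE TABLE dbo.MetadataKeyValue
(
    MetadataID    INT IDENTITY(1,1) PRIMARY KEY,
    ObjectType    VARCHAR(50)   NOT NULL,   -- e.g. Column, ETL Object, Report
    ObjectName    NVARCHAR(256) NOT NULL,   -- e.g. dbo.Orders.Quantity
    MetadataKey   NVARCHAR(128) NOT NULL,   -- e.g. Definition, DataType, Owner
    MetadataValue NVARCHAR(MAX) NULL
);

One narrow table like this can hold business definitions, ETL object descriptions and report definitions side by side, which is what makes a common structure convenient.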
The Good
From the descriptions of the types of metadata being stored, the purpose can be inferred as operational statistics\reporting (of scheduled jobs and ETL processes) and business definitions of columns to serve as a data dictionary for years to come.
As long as the purpose and the use (at the most granular level) are specifically understood, the pursuit of that type of metadata storage can be justified.
Now let's assume the creation of a raw or staging area for our warehouse, where the table structures are going to be almost the same as those of the source. Given that we have the metadata of the source system (table name, column name, data type, length, etc.), we should be able to create a similar structure in our SQL Server warehouse environment by writing some looping programming constructs to create all the staging tables dynamically.
Wow, isn't that awesome? Brag time: I was able to create a 'simple' script that generated a staging area with a whopping 1,000 tables plus their keys. Manually doing it is just not feasible. Now compare the effort involved in creating the script\process that did the work (less than 4 hours) against manually creating each table, which, if you estimate it, is a lot of $$$ and not being smart ("making it work" vs. "actual work").
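The original script stayed with that client, but a stripped-down sketch of the idea follows. It assumes the DB2 metadata dump has been loaded into a table I am calling dbo.SourceTableMetadata (TableName, ColumnName, DataType, ColumnLength, OrdinalPosition – all hypothetical names) and that a stg schema exists for the staging tables:

DECLARE @TableName SYSNAME, @Ddl NVARCHAR(MAX);

DECLARE table_cursor CURSOR FAST_FORWARD FOR
    SELECT DISTINCT TableName FROM dbo.SourceTableMetadata;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Stitch together the column list for this table from the source metadata
    SELECT @Ddl = 'CREATE TABLE stg.' + QUOTENAME(@TableName) + ' (' +
        STUFF((SELECT ', ' + QUOTENAME(ColumnName) + ' ' + DataType +
                      CASE WHEN DataType IN ('char', 'varchar')
                           THEN '(' + CAST(ColumnLength AS VARCHAR(10)) + ')'
                           ELSE '' END
               FROM dbo.SourceTableMetadata
               WHERE TableName = @TableName
               ORDER BY OrdinalPosition
               FOR XML PATH('')), 1, 2, '') + ');';

    EXEC sp_executesql @Ddl;    -- create the staging table

    FETCH NEXT FROM table_cursor INTO @TableName;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;

Keys and DB2-to-SQL Server data type translations would bolt onto the same loop; the point is that the four-hour script is only possible because the source metadata is clean and well structured.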
The (Not so) Bad 
The maintenance and constant upkeep of metadata is the 'not so bad'. The reason: although it does take time and effort to keep it up to date, doing so keeps the metadata in check. There will be the occasional exception that throws the metadata values out of whack and may cause your automated process to fail. Try to fix the metadata, not the process, in this case. Example: some of the data types in the DB2 metadata dump are mistyped as 'datetimer' instead of 'datetime' (the extra 'r' at the end causing the automated script to fail). This is a very simple example. Think of other instances where you are creating primary keys and foreign keys from the metadata. Sure, it does seem like a good idea, and indeed it is if the metadata is in a structured format. And yes, the process can also be fixed or restructured to address this; but a well-defined and well-known process already exists for something like this, and it is called 'data modeling'. This gets into the 'Ugly' part discussed below.
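Fixing the metadata rather than the process in that 'datetimer' case can be as trivial as a one-line correction against the (hypothetical) metadata staging table from the earlier sketch:

-- Correct the mistyped data type in the metadata, not in the generation script
UPDATE dbo.SourceTableMetadata
SET    DataType = 'datetime'
WHERE  DataType = 'datetimer';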
Remember the purpose of the metadata store – it was just to serve as a data dictionary in the simplest form. The ability to do something extra out of it was a by-product.
The (Definitely) Ugly
The ability to script and generate the staging area was possible only because of the well-structured metadata from the DB2 system. If it were not, the effort would be different – spending countless hours getting the DB2 metadata into a "perfect" structure, which could easily take more time than the actual work of just creating the staging area manually, one table at a time. The "perfect" structure would mimic the storage structures in which SQL Server keeps its own metadata – highly normalized, validating all the data being stored. That takes you on a quest for a complex solution.
Your metadata repository structure does not have to be perfect; it just needs to serve its purpose. If you go down the path of making it absolutely right, you are simply over-engineering something simple.
The Summary
Remember that your metadata does not have to be a panacea (it is not, so try not to push too hard toward making it one). Define the purpose of a given piece of metadata, consider the administration\maintenance effort that will be spent on it, and just drive toward achieving that purpose. There is a thin line between the bad and the ugly. As a continuous practice, keep your metadata solution in the 'good' or the 'not so bad' band and it will have met its purpose.
The metadata you pursue will give you a lot of information, and every time you work with the metadata repository you will get new ideas for using it for any number of things (possibly even fueling a rocket – watch out for such instances and stop yourself from designing a rocket fueling system).
Do NOT let the metadata implementation & management overshadow the actual system implementation and your immediate goals.
That is all I have for now. Thanks for reading.
Fun fact: The total number of times the word 'metadata' appears in this article = 50 :-)

Wednesday, February 2, 2011

Managing Database Code for Continuous Integration (Part 2 of 4)

Introduction
Part 2 (of this 4-part series) on Continuous Database Integration covers creating, managing and provisioning a database project for a continuous integration environment. I will be using Visual Studio 2010 to create a database project (.dbproj) and Subversion (open source) for source control. Visual Studio is used mainly for the management of SQL scripts and (most importantly) for the deployment file that it generates, but you can do away with it and have a complete CDBI system set up in an open source environment. Most of the tools selected in this article to set up CI are free.
If your environment extensively uses Team Foundation Server (instead of Subversion) and Application Lifecycle Management (ALM), you should look into setting up continuous integration with those tools before starting to look for free ones. If you are an open source shop (two thumbs up) or want to try standing up a CI environment on your own, read on.

This article is focused on the use of the SQL Server database project type in Visual Studio 2010. The SQL Server project type has 'build' and 'deploy' capabilities (along with a bunch of other features), which are the pivotal components of the CI setup. If you do not want to use VS 2010 database projects and instead use a file-system-based structure for hosting the database scripts, then you need to manually create the re-runnable\deployable script (an OSQL or SQLCMD command script that executes all your deployable .sql scripts). Make sure you test this against a local instance of your database to emulate the build and deploy features. I once created a custom C# console application that looks into specific folders (Tables, Stored Procedures, etc.) and creates a command-line batch script with error handling embedded in it (a custom deploy file generator was needed because that project involved both Oracle and SQL Server and I was working with the Visual Studio 2005 .dbp project type). With VS 2010 you get all these benefits plus database testing and refactoring, and it plainly makes managing a database project much simpler (the .dbproj project type).
Before we begin on the CI setup of the database project, download the following tools to set up the environment:
1.    Subversion – For source control of the database project. This is by far the best and most comfortable source control system I have worked with (sorry, TFS). After installing Subversion and creating a repository, make a note of the repository location, as you will need it to link your database project to it.
2.    Plugin for Subversion integration with Visual studio IDE (either one of the two)
       a. Visual SVN (Free to try, $49 per license)
       b. Ankh SVN (Free)

Visual SVN vs. Ankh: I would recommend Visual SVN to a database-centric shop that has heavy-duty database development, SSIS, Reporting and\or SSAS solutions. I have had problems getting the Ankh SVN plugin to work correctly with these project types; it does not recognize them from the IDE, and you end up managing them from Windows Explorer instead of performing commit\revert operations from the IDE. Visual SVN is much simpler to use and works well with all the project types a database developer needs to work with. Yes, it does come with a price tag, but a license at $49 is dirt cheap. This was when I was working with Visual Studio 2005 integration; things may have changed with Ankh since then, so try it out and see what works best for your scenario.

3.    SQL Server SSMS Tools Pack: This is more of a helper plugin than a requirement. It helps you generate seed data, save custom snippets as hotkeys, generate CRUD procedures and more. Once you start using it you will want to get more. Download it here.

Database Project Setup
Once the prerequisite software is installed, open Visual Studio 2010 and create a new SQL Server 2008 project.

Database Project Type

For demo purposes I am creating (reverse engineering by importing an existing database during project setup wizard) a database project for AdventureWorks SQL Server 2008 database [If AdventureWorks sample database is not installed on your database server, it can be downloaded from CodePlex]. Complete the project setup by following the necessary steps as per your database configuration. Leaving them in their default settings is also fine for the moment.
Once the project setup wizard completes, the solution explorer should resemble the fig. below. Right click on the project name “AdventureWorks” and select Properties to bring up the project settings. Click on the ‘Build’ option to view the location of the deployment script. Select the ‘Deploy’ tab on the left to view the deployment settings.

AdventureWorks Deployment Options

Now that we have the project ready, right click the project and select 'Build'. The status bar should go from 'Build Started' to 'Build Succeeded'. After the build succeeds, deploy the project by right clicking the project and selecting 'Deploy'. This will create a deployment script named 'AdventureWorks.sql' (in Visual Studio\Projects\YourProjectFolder\sql\debug). Two deployment options are available (for now, leave it at its default, option 1):

1.    Create a deployment script (default)
2.    Create a deployment script and run it against a database.

Location of AdventureWorks.sql deployment script

The deployment script is the most important artifact for a successful CI system setup. It is a compilation of all the database objects belonging to your database project (including seed scripts, security, etc.). After the first deployment, whenever a change is made to the database project, a script with the same name is regenerated to include the changes.
The next step is to add your project to Subversion source control. To version control your project, right click on the project and select 'Add solution to Subversion'. Select the repository path and add the project. Finally, right click, add the files, and commit\check-in the solution. The project is now ready to be shared by anyone who has the setup listed earlier.
 

Preparing artifacts for CI
An isolated SQL Server database instance needs to be provisioned for the continuous build and deploy of database scripts (tear-down and reinstall). This database should not be accessible to developers for development or testing purposes. The sole reason for its existence is to test the continuous deployment of the database on either code commits or at regular intervals of time. It also serves as a sanity check of your end product at any point in time.
A Visual Studio 2010 database project provides the tear-down and install script (tear-down = recreate database) through the right-click Deploy action. But in a continuous integration environment we would like this file to be created automatically on every build. This can be implemented using the VSDBCMD command. You can use VSDBCMD just for creating the re-runnable deployable script (with the /dd:- command-line option) or for creating and running the deployable script (with the /dd:+ option). If you plan to use it just for creating the re-runnable deployable script, then the script needs to be executed separately in each environment by either 'sqlcmd' or 'OSQL'.

Ideally, I would prefer using VSDBCMD just to create the deployment script and then hand the script over to the DBA, specifying the parameters (documenting them in an implementation plan for the database). DBAs are more familiar with sqlcmd\OSQL than with VSDBCMD, plus using VSDBCMD to execute the deployment script requires a bunch of assemblies (dll files) to be copied onto the database server. I am not sure how the production DBA of today will accept that change. Thinking like a developer: sure, VSDBCMD is cool and you would happily use it in the QA and production environments. But in the real world, DBAs run the show. By just creating the deployable file in development and then running the same file in QA and Production using sqlcmd, you standardize your deployments and make them simpler and worry free. (Did I mention that VSDBCMD also requires a registry change if Visual Studio is not installed on the machine, which is the database server?)
Not always a smooth ride; enter the obstacle: the hardcoded variables in the deployment file.

Hardcoded variables in the Visual Studio deployment file: The Visual Studio deploy process creates three default parameters in the deployment file: DatabaseName, DefaultDataPath and DefaultLogPath. The ability to edit\override them is what makes the discussion of sqlcmd vs. VSDBCMD interesting.
The main advantage with SqlCmd over VSDBCMD is the ability to pass variables as parameters to the deployment script from the command line. This is a big advantage as the VS DB project hardcodes the database name, data and log file path (mdf and ldf) in the deployment script (AdventureWorks.sql, see setvar commands below) and although there is a way to get around it, it is painful. 


:setvar DatabaseName "AdventureWorks"
:setvar DefaultDataPath "C:\Program Files\...\DATA\"
:setvar DefaultLogPath "C:\Program Files\...\DATA\"

Note: The above variables can be suppressed by editing the project deployment configurations. (This option can be used at runtime via command line params also).


At this point you have two options with sqlcmd: manually change the DatabaseName, DefaultDataPath and DefaultLogPath variables in the deployment file, or change the variables on the command line with sqlcmd using the "-v" flag.
Ex: sqlcmd -S <ServerName> -d master -i AdventureWorks.sql -v DatabaseName="NewAdventureWorks" DefaultDataPath="C:\Data\" DefaultLogPath="C:\Data\"

If you decide to go with VSDBCMD for creating the deployment file and deploying it to the database server, a workaround is required to make this work. Complete the following workaround steps (skip both steps if you are going to go with sqlcmd for QA & production deployments):
1.    Override the DatabaseName at runtime with the TargetDatabase command line option. Ex: /p:TargetDatabase="NewAdventureWorks". This will override the :setvar DatabaseName "AdventureWorks" to “NewAdventureWorks”.

2.    Overriding file path variables: Let's get something straight first – the variables DefaultDataPath and DefaultLogPath cannot be overridden. Microsoft has received requests for this and is planning to allow overriding them in the next release of database projects. For now we will have to make do with a workaround.

a. Right click on project ‘AdventureWorks’ and select ‘Deploy’. Edit the Sql command variables file by clicking the Edit button.



b. Add two additional variables ‘myDataPath’ and ‘myLogPath’ as shown below.


c.    In the database project, navigate to Schema Objects\Storage\Files and change the data file path variable in AdventureWorks_Data.sqlfile.sql and the log file path variable in AdventureWorks_Log.sqlfile.sql to reference the newly created command variables.
-    Rename $(DefaultDataPath) to $(myDataPath)
-    Rename $(DefaultLogPath) to $(myLogPath)


d.    Right click and Build the project. Navigate to .\AdventureWorks\sql\debug (location of your project) and open the AdventureWorks_Database.sqlcmdvars with Notepad. The new variables will be available to change in here.


As you can observe from steps 1 & 2 above, the workaround for using VSDBCMD can be a bit painful. One other important thing to keep in mind is that VSDBCMD does not execute pre-prepared deployment files – also an item the MS team is considering changing in the next iteration. To create a deployment package, VSDBCMD needs the necessary assemblies, build files, manifest, sqlcmdvars file, etc. to prepare the end product (the deployment file) and run it. sqlcmd, on the other hand, easily runs pre-prepared deployment files (like AdventureWorks.sql).

Creating the build package (build files) & executing the deployment output
For now, I am going to demonstrate creating the deployable file with VSDBCMD (minus steps 1 & 2 above) and deploying them on different environments with sqlcmd instead of using VSDBCMD.

The workflow of the continuous builds and deployments that we are trying to emulate is:
a. Clear existing deployable file: In the \sql\debug folder, delete the file AdventureWorks.sql.

b. Build the project: For now just right click and select “Build”. I will be using MSBuild to perform this task in the next article. Behind the scenes, when you right click and build, Visual Studio uses MSBuild for the build process to create the build files in \sql\debug folder.

c. Generate the deployable file AdventureWorks.sql: Use the VSDBCMD command-line tool on the integration machine with the database manifest file (AdventureWorks.deploymanifest) to generate the deployment file "AdventureWorks.sql".

Before trying out this step, make sure VSDBCMD is installed on your integration machine.
-    If Visual Studio is not already installed, then follow the instructions here to download and install VSDBCMD.
-    If Visual Studio is already installed (VSDBCMD is located in "C:\Program Files\Microsoft Visual Studio 10.0\VSTSDB\Deploy"), reference it by adding that folder to your PATH environment variable (instead of copying the files).

Execute the following command in the \sql\debug folder from the command prompt:

VSDBCMD /dd:- /a:Deploy /manifest:AdventureWorks.deploymanifest


The /dd:- option ensures that the deployment script 'AdventureWorks.sql' is generated but not executed against the database server.

d.    The output of the step above (AdventureWorks.sql) is executed against the integration database instance. Execute AdventureWorks.sql using 'sqlcmd' to test out the deployment.

sqlcmd -S <ServerName> -d <DatabaseName> -E -i <InputFile.sql> -o <OutputFile.txt> -v DatabaseName="<NewDatabaseName>" DefaultDataPath="<DataPath>" DefaultLogPath="<LogPath>"

Ex: sqlcmd -S myServer -d master -E -i C:\AdventureWorks.sql -o C:\LogOutput.txt -v DatabaseName="NewAdventureWorks" DefaultDataPath="C:\Data\" DefaultLogPath="C:\Data\"

Note: Variables set with –v parameters overwrite hardcoded variables set in the deployment script.

Summary
This completes our preparation of the build items needed to set up the database project for Continuous Integration. Not to worry – the steps above give you the working knowledge of how the CI product (Hudson) is going to orchestrate them on the server for us in the next part of this series. As far as the choice between using VSDBCMD only for the build or for both build and deploy goes, it depends on your environment. If you are comfortable with the changes you have to accommodate to get VSDBCMD working in your environment, then go for it; otherwise, just use it for build purposes on the build server to create the deployable product and use that going into the next environment phases (QA and Production).

The next part of the series deals with orchestrating steps a through d above in a repeatable manner, triggered either by code commits\check-ins or at regular intervals of time. This will be set up using free tools – Hudson and NAnt. The next article will explain in detail how to set up Hudson as a CI server for database projects and how to configure NAnt tasks for the actual implementation. I will also take some time to discuss a proactive vs. a reactive CI setup and how that affects development. That's all I have for now. Thanks for reading.

Wednesday, October 6, 2010

Agile Data warehouse Planning & Implementation with Hudson, NANT, Subversion and Visual Studio Database Projects (Part 1 of 4)

Overview
The notion of managing data warehouse projects with continuous integration and open source technologies is an uncommon practice, or I guess just unpopular, in IT shops dealing with database code, SSIS and SSAS projects (in my experience). Excuses\opinions differed from company to company:

• “It doesn’t apply to database code projects”
• “What is Continuous Integration and how does it apply to data warehouse projects?”
• “Here at Acme Inc. change control is done by our architect\DBA who uses a tool called ‘AcmeErwin’ or AcmeVisio to generate code, so we don’t need the additional bells and whistles”
• “Automating testing & deployment for database projects, SSIS packages is not possible”
• “Is it worth the effort?”
• “We are special, we do things differently & we don’t like you.” – kidding about this one.

In this article I will try to justify the use of CI on data warehouse projects and address the concerns above. The subject matter is geared towards planning and implementing data warehouse projects with agile development practices, along the lines of iterative feature\perspective-driven development (perspective = subject area = star schema). The article begins with an introduction to agile development practices, reviewing evolutionary database design, defining continuous integration in the context of database development, comparing viewpoints of waterfall and JAD methodologies to Agile, and demonstrating the coupling of the Kimball approach with Agile to establish a framework for planning long-term project milestones composed of short-term visible deliverables for a data mart\warehouse project. A detailed walkthrough of setting up a sample database project with the technologies (VS database projects for managing code, Hudson for continuous database integration, NAnt for configuring builds, Subversion for source control and OSQL for executing command-line SQL) is included for demonstration.

Due to the verbose nature of my take on this, I am writing this article as a 4-part series. Trust me; the next 3 parts are going to be hands-on cool stuff.
Part 1 – An introduction to agile data warehouse planning & development and an introduction to Continuous Database Integration (CDBI).
Part 2 – Create the database project with Visual Studio Database Projects & Subversion.
Part 3 – Prepare the build machine\environment with Hudson and NANT
Part 4 – Making the medley work. Proof is in the pudding.

Introduction
After shifting gears between different approaches to database development at various client sites and compiling the lessons learnt, I am close to settling on a standardized methodology for database development and management. One can apply this approach to projects regardless of size and complexity, owing to its proven success.

Before starting the introduction to Continuous Integration and Agile, let me take a step back and share a lesson learnt working with the waterfall model. While working on a new data warehouse project and adopting the waterfall SDLC approach for database project planning and implementation, over time the implementation did not follow the estimated plan. Sure, it was only an estimate, but you don't want these estimates changing forever. Here, the implementation was almost always off track when compared to the initial plan. This was not visible during the initial planning phase, but over time the mismatch became more evident during the development cycle. The mismatch was due to 'change' – changes in requirements, or changes caused by external factors. When you pick a data warehouse development methodology it is either the Kimball approach or the Inmon approach, and my project started off with waterfall + Kimball. But because changes in business requirements were too frequent, waterfall was proving to be a showstopper. The reaction to changes and the turnaround time required by the development team were slowing down the project timeline. The requirements kept changing, plus this project was already a mammoth effort, with more than ten subject areas with conformed dimensions forming a data mart.

The old-school approach to starting a new database project (with the waterfall cycle) begins with initial requirements, then comes the logical model, and then the physical model. Usually in this approach you have an 'architect', a 'data modeler' or an application DBA on the project who owns the schema and is responsible for making changes from inception to maturity. This methodology is almost perfect and everyone starts posting those ER diagrams on their walls and showing off, until Wham! Requirements start changing, and for every change you need to go back, change the specification, the schema and then the actual code behind it, and of course allow time for testing. As the frequency of these changes goes up, it catapults the delivery dates and changes your project plan. In this approach, the turnaround time for delivering an end product with the identified changes is just not feasible. I am not debating that using a waterfall approach determines success or failure; I am trying to juxtapose the effort involved (basically showing the time spent in the spiral turnover of the waterfall model and then comparing it with an agile approach). This is a classic example of how the traditional waterfall approach hinders the planning and implementation of your project.

This called for a change in the development approach, to one that could react quickly to any change affecting the timeline of deliverables. The new approach adopted was an agile development practice + the Kimball method, which resulted in a successful implementation of a large-scale data mart for a health care company. By now you should have an idea of what I am trying to sell here.

What is Continuous Integration?
Continuous Integration is all about automating the activities involved in releasing a feature\component of software and being able to repeat the process in order to reduce manual intervention and thus improve the quality of the product being built. This set of repeatable steps typically involves running builds (compiling source code), unit testing, integration testing, static code analysis, deploying code, analyzing code metrics (quality of code, frequency of errors), etc.
Continuous Integration for database development is the ability to build a database project (the set of files that make up your database) in a repeatable manner such that the repeatable action mimics the deployment of your database code. Depending on the database development structure, database projects can be set up to run scheduled builds, automate unit testing, start jobs and deploy code to different environments (or stage it for deployment) to reduce the human error that a continuous and monotonous process can bring.

The CDBI (Continuous Database Integration) Environment
The best part about the tools I am going to use to set up the continuous integration environment is that they are all FREE. Almost all, that is, except Visual Studio Team Edition for Database Professionals. I would highly recommend VS Team Edition for Database Professionals as the tool for developing and managing code when working with database projects. The other tools I used for continuous integration are Hudson, Subversion and NAnt. Yes, freeware used mostly by the other (non-Microsoft) community, but after applying them side by side with MS technologies, they proved to be a good mix.

All of the above opinions, when drilled down, point to the concept of 'done', or to 'a deliverable'. The granularity of the deliverable is pivotal to incremental software development. It is just a matter of perspective – in incremental software development the granularity of your deliverable is much smaller and the visibility much clearer and more concise when compared to a deliverable on the waterfall track. On continuous sprints you know for sure what needs to be delivered by the next sprint\iteration. At this point you have a definition of 'done'. This is the most important thing when we start getting into agile development practices – the concept of done. [TechEd Thanks]
The more granular the tasks, the easier they become to control and complete. Once you start slacking on a few, they tend to pile up, and when that happens on a larger scale they fog the plan even more. Once you step into the shoes of a project planner, or of a lead, this gap will become more evident and clear.

This is where CI helps in meeting deadlines, showing progress of work in regular sprints where the previous sprint's progress is evaluated (to validate the concept of done) and the requirements for the next sprint are defined.

Summary
To summarize, Continuous Integration in a DB environment is all about developing your database code in sprints (of two weeks or more, your choice), by feature or perspective. Ex: a feature in the AdventureWorks database would be the HR module or the Sales module; an example in a data warehouse environment could be the Inventory star schema. It is these short sprints (regular intervals of feature completion or delivery, usually 2 weeks) with clear, quantifiable requirements (the definition of 'done') that help gauge the status of work. Once the developers adapt to this rapid SDLC, visibility into the progress of the work goes up, resulting in accountability and ownership of work, a more cohesive team, increased productivity (I can bet on this one) and, most important of all, a quality product being delivered in chunks to form the big picture – the big picture being a collection of perspectives (star schemas) that plug together to form a data warehouse. That is all I have for now; more to follow in parts 2, 3 and 4 on setting up the CI environment with a database project, using Subversion as the source control system and NAnt for creating build files.

Thanks for reading & stay tuned ….

Vishal Gamji

Tuesday, June 29, 2010

Deploying SSIS Packages with XML Configurations

SSIS trivia: What do admins detest most about SSIS deployment?
Answer: The Environment Variable configuration.
One of the features included by Microsoft for flexibility & ease of deployment between environments. If your answer was registry files, you were close. Environment Variables is on top (Registry Files comes in second).

The Nutshell Overview: This article is all about using XML configurations in SSIS, direct vs. indirect configurations, pros and cons and explaining the deployment issues and concerns with these types of configurations. For the purpose of this article, when relating to database connection strings I am assuming that AD authentication is used all over. SQL authentication is out of scope, for now (otherwise I will have geeks rioting). This article is also me doing some ground work for my next article.

XML Configurations: XML configurations in SSIS can be used in one of two ways:
1. Direct Configuration: This configuration setting takes the form of a hardcoded path in the package itself; to change it between environments when moving packages to QA or Production, the deployer has to rely on the manifest (explained below) or change the dtsx manually. When this type of configuration is used, it becomes a mandate that the path of the XML configuration file remains the same for all developers (on their machines) working on the project.
For example, say the path of the XML config is C:\myXMLConfig.dtsconfig for 10 packages in a solution. Now, when another developer joins the merry band, the new developer has to ensure that the configuration file is in the same path set by the previous developer. If the new developer places his configuration file in a different location, say "D:\myXMLConfig.dtsconfig", then the previous developer's development environment will not load correctly (assuming both developers are working on gold code and checking out and committing to the same source repository).



Fig 1: Direct XML Configuration

The SSIS Manifest:
The SSIS manifest file is a setup file, similar to the installers you run for software products, where a few clicks of next-next-finish complete the installation. The manifest file, when opened, launches a wizard where you can specify the new location of the packages and change the configuration location and values when deploying to different environments. From what I understood of the workings of the manifest, it basically does two important tasks (please comment if I missed any):

a. Copying the SSIS packages between environments (Same as using XCOPY or DTUtil. Bet DTUtil uses XCOPY behind the scenes)

b. Change the XML configuration file path (a simple find-replace on the XML node based on ConfigurationType = 5 or 1; open the SSIS package in a text editor to view the package XML for the configuration types)

i. ConfigurationType =5: Indirect Configuration (described below)
ii. ConfigurationType =1: Direct (hard-coded) XML file path configuration.

2. Indirect configuration: Indirect configurations in SSIS packages allow you to reference the configuration file through a virtual name, i.e. an environment variable. This means that the configuration value embedded in the package(s) is an environment variable (key\name) whose value is the actual path of the XML configuration file. When this kind of setting is used, at deployment time only the environment variable needs to be added to the environments where the packages are deployed.

Fig 2.1: Indirect XML Configuration


Fig 2.2: Environment variable configuration

There are issues with implementing this type of configuration setting in some environments. Two main ones:

a. Machine reboot on environment variable add\modify: The belief is that when an environment variable is added, in order for it to take effect – i.e. for the packages to start recognizing it – the machine needs to be restarted. This is not true. The same is believed about modifying the environment variable. There have been numerous questions and concerns on this topic, and since an obvious panic button is hit with the word 'reboot', people tend to stay away from this setting. The reality check is that only the process which runs the SSIS packages needs a restart. Typically this is the SQL Agent service, which is the scheduler for the packages; in a development environment, BIDS needs a restart. This is one of the most important concerns a DBA has with SSIS configuration.

b. Development & QA on the same server: This issue crops up when DBAs provision development and QA instances of SQL Server on the same machine. In such environments it becomes impossible to implement indirect configuration settings with environment variables, as one environment variable cannot hold different values for development and QA.

Other than obstacle (b) above, the environment variable coupled with XML configurations is the simplest and most flexible way of deploying SSIS packages between environments (again, going back to the assumption of AD authentication). I have gone down the path of thinking up different ways of convincing the admins at various companies to implement environment variables; in some cases I was able to sell my pain, in other cases I had to walk away with explanations about the 'complexity' of 'maintaining' environment variables, and it was back to the drawing board to find another solution.

As a developer, I may not be totally in tune with the 'complexity', 'security breaches' and 'challenges' (stress on the quotes) involved in maintaining environment variables, but really? After all the security 'gizmos' an organization has in place to stop Intrusions, Inc., one would really ponder the need for an overdose of security. But such is life … and I don't blame the admins for doing their job (maybe I would have done the same thing if I were a production DBA). DBAs want to minimize the loopholes, build better security practices and keep the servers clean of environment variables and registry files; on the other hand, a smart developer wants to incorporate similar best practices – coding standards, ease of deployment between environments, etc. – and truly it is difficult to see green on both sides of the fence.

In the domain of deploying packages between environments, the MS Integration Services team did a good job making XML configurations, database configurations, etc. available to us in SSIS; using them, the migration of code between development, QA and production has become simple and efficient. Along those lines, they have also given us the (risky) capability of saving sensitive information as clear text in XML configurations. They do have a neat way of handling sensitive information – the packages themselves strip out passwords and anything marked sensitive – and it is we who take the risk when we choose to save such information in clear text. This goes back to the msdb and file system deployment modes, which I will not delve into.

If you think of SSIS configurations as a bane, try implementing continuous integration and deployment of SSIS projects\packages without configurations, and trust me, if you have not already, you will feel the pain (been there, felt that).

Summary:
At the end it comes down to selling your pain in that meeting room with quasi-knowledgeable management, the so-called technical managers and DBAs, over security & maintainability vs. flexibility & ease of implementation. You know which scale weighs heavier, don't you? It's the DBA side of the scale, i.e. the security part of it. But you don't see me frowning, coz someone once told me "a problem\setback is an opportunity in disguise" (or something like that). In my next article I will demonstrate how I really did find opportunity in this obstacle (hint: it has something to do with continuous integration with SSIS & C# and the next paragraph).

Personally, I like database configurations, and in my implementations of SSIS projects I use a mix of XML configuration (the road less travelled: indirect in the development environment, converted to direct in QA & Production) and database configuration elements. This keeps the deployers\admins at bay and gives me more power as a developer to enforce configurations at the database level for my application.
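For reference, the table that SSIS generates when you point a package at a SQL Server configuration looks roughly like the following (this is from memory, so verify the exact shape against your own environment); nothing stops you from scripting it yourself and pre-loading it per environment:

CREATE TABLE dbo.[SSIS Configurations]
(
    ConfigurationFilter NVARCHAR(255) NOT NULL,  -- groups the settings for one package\application
    ConfiguredValue     NVARCHAR(255) NULL,      -- the value applied at run time
    PackagePath         NVARCHAR(255) NOT NULL,  -- e.g. \Package.Connections[MyDb].Properties[ConnectionString]
    ConfiguredValueType NVARCHAR(20)  NOT NULL   -- e.g. String, Int32
);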
More about the road less travelled in my next article.
Sometimes I wonder if I should start another blog - SqlDeveloperRants.blogspot.com.

Wednesday, April 15, 2009

Enumerating reporting services metadata with C# and SQL Server 2005

Who should read this article: Report writers, database developers, data modelers, anyone familiar with SQL Server 2005 & 2008 Reporting Services (familiarity with report models, Report Designer and Report Builder), and lastly anyone with some knowledge of programming with C# and XML.

Introduction

Why should report model metadata be stored (in a metadata repository)? This could be a question you are asking yourself or you already know the answer and are looking for additional silver bullets to help explain it to your manager or project sponsor.
One of the more compelling reasons for capturing metadata about entities and attributes (and, to an extent, roles & relationships) in Reporting Services report models in any organization is 'documentation'. Be it client documentation or in-house documentation, it can serve as a reference guide for business users, as a technical reference for ETL developers, for that new developer who just joined your organization and wants to know the definition or description of a business term, or, more importantly, as a quick reference for report writers (who may not be familiar with the field names used in the report model). Whichever the use, the bottom line lies in the term 'Metadata Management' (not to be confused with Master Data Management – an entirely different topic). The objective of this article is to extract meaningful and necessary metadata from the smdl file using C# and store it in a SQL Server 2005 database.

Report Authors
I separated the report writers based on their method of developing reports:
1. Using Report Designer (from Visual Studio or BIDS) for canned\known reports – technical report writers.
Technical report writers are usually report developers who are familiar with the data model and may or may not know much about the business processes.
2. Using Report Builder (launched from the browser) for ad-hoc reports – ad-hoc report writers.
Ad-hoc report writers are usually business users (end users working with the application or management folks).

This blog is more inclined towards the ad-hoc report writer community.

If you are interested in knowing more about metadata and metadata driven architectures you will have to wait until my next article or just google ‘Metadata driven architecture’ and ‘Microsoft Project Real’. Microsoft’s Project Real has a good implementation of metadata (open source) for enumerating and storing metadata in the database for SSIS, SSAS and for database structures, but lacks a module for enumerating Reporting Services models.

Report Models
A report model is a slice of the database structure (views and\or tables) that relates to a specific business scope. For example, in an automobile dealership data warehouse, one of the report models would be a Sales model, which links back to an inventory table and a date-time dimension. Report models are created with a 'data source view' reference. Think of a data source view as a visual representation of a database diagram, with tables or views having keys and relationships. Report models are built from the end user's perspective, as end users use them as the source for building ad-hoc reports.

Note: Report models have a file extension of smdl. The content of these ‘smdl’ files is a set of XML nodes describing the model (They contain entities, attributes, rules, roles, relationships, etc).


Creating the metadata repository for Report Models
The report model repository can be created as a separate schema with a set of tables or be part of your own custom metadata solution; or feel free to use the scripts and schema I provide in this article. Once your schema is created, two stored procedures need to be created for inserting into the Entity table and the Attribute table. You could create simple CRUD-type procedures that do one-row inserts, but a more efficient solution is to create OPENXML stored procedures that insert all the data in one call. In my example I used the latter. By using the OPENXML stored procedures I reduced the number of database insert calls from hundreds to one. I have provided the code for the OPENXML stored procedures at the end of the article.

Figure 1: Report Model schema.

Now that your schema and stored procedures (“AddAttribute” and “AddEntity”) are created (and tested), you can begin with the application code.
At this point, make a copy of the report model file from which the metadata needs to be extracted.

Extracting data from the Report Model (.smdl)
There are various ways of retrieving the report model data from the smdl file, like using XSLT and XPath in C# to get to the desired nodes, or storing XML chunks of the model file as the XML data type in SQL Server and retrieving them with XQuery. Any approach can be adopted; I will demonstrate one 'simple' way of doing it. You could apply a similar extraction and storage technique for report files (rdl files).


For my solution I used a C# Windows application in Visual Studio. You could create a console app instead; I created a Windows app so that I could browse for .smdl files without having to type in the path.
Basically, the extraction from the smdl (XML) files was done using the XmlTextReader class provided in the System.Xml namespace. I parsed the entire XML file for tags (XmlNodeType) hierarchically, based on the XML node structure (Entity --> Attribute --> Attribute Properties).

Each line in the XML carries these tags to identify the item associated with it. Once an element named "Attribute" is found, I look for additional properties of the node with tags such as "Name", "DataType", "Nullable", "Format", "SortDirection", "ColumnName", "Description", etc., and start building my XML string structure.
A few members of XmlNodeType (and how they were used):
1. XmlNodeType.Element: Used for element-level parsing.
2. XmlNodeType.Text: The text content of a node, used to retrieve the description of a column, etc.
3. XmlNodeType.EndElement: To find the exit point of an element (entity, attribute, or anything else).


An important thing to note here is that I am NOT retrieving values one at a time and inserting them into the metadata store. I am actually building a well-formed, valid XML string containing the key-value mapping structure so that it can be passed to the SaveAttribute and SaveEntity methods (which call the OPENXML stored procedures).
Once this XML string is created, it is passed to the methods SaveAttribute (which executes the stored proc AddAttribute) and SaveEntity (which executes the stored proc AddEntity). Each of these methods executes with a single call to the database. The relationship between an attribute and its parent entity is also saved to the database based on GUIDs. Following is a sample of the data from the metadata repository.

Figure 2: Sample data view.

Figure 3: Sample OPENXML stored procedure content (where @AttributeXML is the XML string created by the C# code):
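Since the screenshot of the procedure does not reproduce well here, the following is a sketch of what an AddAttribute-style OPENXML procedure can look like. The element names, column names and data types are illustrative assumptions; align them with the XML string your C# code actually builds and with the schema in Figure 1:

CREATE PROCEDURE dbo.AddAttribute
    @AttributeXML NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @hDoc INT;

    -- Shred the XML string built by the C# code into a relational rowset
    EXEC sp_xml_preparedocument @hDoc OUTPUT, @AttributeXML;

    INSERT INTO dbo.Attribute
        (AttributeGUID, EntityGUID, AttributeName, DataType, Nullable, Description)
    SELECT  AttributeGUID, EntityGUID, AttributeName, DataType, Nullable, Description
    FROM    OPENXML(@hDoc, '/Attributes/Attribute', 2)
            WITH (AttributeGUID  UNIQUEIDENTIFIER,
                  EntityGUID     UNIQUEIDENTIFIER,
                  AttributeName  NVARCHAR(256),
                  DataType       NVARCHAR(128),
                  Nullable       BIT,
                  Description    NVARCHAR(2000));

    EXEC sp_xml_removedocument @hDoc;
END
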
Summary

The idea behind this article is to give the community another way of retrieving metadata from report models. This would be a good addition to the existing BI metadata repositories that companies currently maintain. One really has to understand and harness the power of metadata. There are people who do not support the idea behind metadata and its use. Think of a simple scenario (a real-world project) of metadata use – a project migrating around 1000 tables with millions of rows of transactional data from AS400 iSeries to SQL Server 2008 using SSIS. Were you going to create the source, staging and warehouse structures manually? I don't think so. Ideally you would retrieve all the metadata from the AS400 system catalog and write dynamic SQL scripts to create all the structures.
Long story short – utilize metadata to the fullest extent, maintain a store of business, technical and ETL process metadata, and incorporate it during the earlier phases of your BI projects. Hope this was helpful. Do leave feedback.

Download code: Download Code here

Thanks,

Vishal Gamji

MCITP - Database Developer

admin@expertsqltraining.com

Saturday, March 14, 2009

SSIS Execute SQL Task failure

Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.


Every so often, ETL developers working with the Execute SQL Task in SSIS encounter the error above. I have seen a few developers try to 'quick-fix' this by changing type mappings without exactly knowing the differences between them, and re-running the task only to get to the next red light. The best example of this would be the type mismatch of Long and Numeric types when using the native OLEDB provider. I wouldn't be surprised if there are developers out there looking up the precision error on the types. I will not delve too deep into all the providers, but will provide a reference to a very good resource from Microsoft, which I think should be on every ETL developer's desk.


In this (first) blog I will point out the most common places in the Execute SQL Task where you should troubleshoot the error above. I will also go over some SQL provider mumbo jumbo at a high level.

Troubleshooting steps (order need not be followed):


1. Verify that for the stored procedure\SQL statement used, the parameter counts and directions (input, output or return value) are set appropriately.

If using an OLEDB or ODBC provider, check that the number of "?" placeholders (or, if using an ADO or ADO.NET provider, the number of @ parameters) equals the number of parameters mapped on the parameter mapping screen.


2. Verify your data types.

Verify that the data types declared in the stored procedure or SQL statement are mapped to compatible data types in the parameter mapping. For example, when using the OLEDB provider, map integer (int) parameters to the Long data type.


Providers: Which data provider should be used when choosing between (managed) ADO.NET and (native) OLEDB? There are n number of websites and blogs (yes, this is another one of them) which will tell you why you should prefer the OLEDB provider over ADO.NET, as ADO.NET is a managed provider that adds another layer of code to connect to the data source, making it slower than OLEDB. If you really want to find the exact runtime execution difference, I would suggest running Profiler with the two connection managers and comparing them.


Helpful links for data type mapping and provider info:

- http://msdn.microsoft.com/en-us/library/aa198346(SQL.80).aspx

- http://msdn.microsoft.com/en-us/library/aa263420.aspx

- http://www.carlprothman.net/Default.aspx?tabid=97#10


3. Verify the ResultSet property:

Verify that the ResultSet property is set appropriately for the SQL command being executed. For example, if the SQL command\stored procedure returns a full result set, map the result to a variable of the 'Object' data type, so that you can retrieve values from the object based on the index position of the result (0 = first column, 1 = second column and so on), as members of the 'Object' type are ordinal. Also make sure that all columns in the SQL result set have column names.
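As a quick illustration of that last point (purely my own example), a computed column without an alias is a common culprit:

-- Risky: the computed column has no name
SELECT  name, create_date, DATEDIFF(DAY, create_date, GETDATE())
FROM    sys.objects;

-- Better: every column in the result set is named
SELECT  name,
        create_date,
        DATEDIFF(DAY, create_date, GETDATE()) AS AgeInDays
FROM    sys.objects;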


4. Do you trust your SQL command\stored procedure? There is a very good possibility of it being incorrect:

The Execute SQL Task can do nothing about bad\erroneous SQL code given to it for execution but fail. As a routine, test the SQL for all possible exceptions, giving more importance to situations where a value or a list of values is expected and nothing is returned.


For example, the subquery inside the stored procedure may be returning multiple values, NULL\invalid values, or no values at all. Watch out for the tricky no-value situation.

Example: In this example, our objective is to retrieve object_id from sys.objects where the name meets a certain condition. The 'ResultSet' property of the Execute SQL Task is set to "Single Row" (as our objective is to retrieve a scalar value).


[Note: Each Case builds on the previous cases.]


Case 1: Simple Select to assign to a variable


SELECT object_id FROM sys.objects WHERE name = 'sysrowsets';


If the result set of the Execute SQL Task is set to "Single Row", this will work only when the WHERE condition is satisfied. If it is not satisfied, the query does not return NULL; instead it returns nothing, i.e. an empty result set (see screenshot below) – which will cause the task to fail with the same error, "ResultSet property not set correctly". [A developer must keep this exception in mind when testing the Execute SQL Task.] This query will also raise an exception when multiple values are returned and our task is to assign a single value to a variable from the output of the query.


Case 2: Handle multiple values.


DECLARE @Object_ID int;


SET @Object_ID = (SELECT TOP 1 object_id FROM sys.objects WHERE name = 'sysrowsets');


In this case, we handled the multiple values problem from Case 1. Now, if the query returns anything other than a NULL value or an empty result, we are close to living in a perfect world; but, as it turns out (someone told me), we don't. If\when a NULL or an empty result is returned by the above query, an exception will be thrown. Yes, it is the same exception we are discussing in this article.


I am more inclined toward the 'when' it will happen than the 'if' possibility, because at least in the scenarios I have worked with, it was bound to happen. Then again, your case may be different.


Case 3: Split the simple select into 'simpler' selects.


DECLARE @Object_ID int;


SET @Object_ID = (SELECT TOP 1 object_id FROM sys.objects WHERE name = 'sysrowsets_Bogus');


-- The _Bogus suffix makes the query return an empty result set.


SELECT ISNULL(@Object_ID, 0) AS 'Object_ID';


The first SELECT statement here takes care of multiple values; the second SELECT statement handles any NULL value, setting it to 0, and also solves the 'empty' result set problem, since @Object_ID is selected separately in a different query (the 2nd query).


Summary

To summarize, double check your SQL queries\stored procedures before you execute them in the Execute SQL Task. Don't jump to hasty conclusions based on the error message description Integration Services gives you. For instance, if you get an ARITHABORT error [Ex: Update failed because the following SET options have incorrect settings: 'ARITHABORT'] on a stored procedure that performs an insert\update\delete, check the database compatibility level (sp_dbcmptlevel), which may be set to 80 (SQL Server 2000) and need to be upgraded to compatibility level 90 (SQL Server 2005), or verify whether any computed columns are performing calculations that may be throwing the ARITHABORT errors. Check for any external factors that affect your Execute SQL Task variables, for instance package configurations or dynamic expressions. Performing these checks will surely save you some time in the long run and also make your packages more robust.
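For example, checking (and, where appropriate, raising) the compatibility level on SQL Server 2005 is a one-liner with sp_dbcmptlevel; the database name here is just a placeholder:

-- Report the current compatibility level of the database
EXEC sp_dbcmptlevel 'MyDatabase';

-- Set it to SQL Server 2005 behavior (90)
EXEC sp_dbcmptlevel 'MyDatabase', 90;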


Hope this article\blog was helpful. Do leave feedback (as this is my first blog). My next blog will be about enumerating Reporting Services metadata from models into a database repository. Hope to get it out soon.


Thanks,

Vishal Gamji

MCITP - Database Developer

admin@expertsqltraining.com