
Wednesday, November 05, 2008

SQL Server Scheduled Backups with NAnt

When installing and using production web applications, backups are always a must. I have written about SQL Server backup before, and this post will expand on that. The main goal here is to create a solution that periodically backs up your database. To do this, we’ll use the backup script from the previous post, employ NAnt to execute the script and zip the created backup. Lastly, we’ll schedule the execution of the NAnt build script through the Windows Task Scheduler or the “at” command. This procedure can back up a local or a remote SQL Server (as long as the remote server is on the same network).

Prerequisites

  • Installed and configured version of NAnt (see Getting Started with NAnt - .NET Build Tool)
  • Installed NAntContrib tasks (for the SQL task, see NAntContrib on SourceForge)
  • Task Scheduler Service enabled in Control Panel/Admin Tools/Services
  • SQL login with “dbo” rights to the database (to install the backupDatabase procedure)
  • SQL login that has “public” rights to the database (so it can execute the backup)

Backup SQL Procedure

The script below will create the SQL Server stored procedure that creates a backup file of a given database. It takes two parameters:

  1. databaseName – The SQL server database to be backed up
  2. backupDirectory – The directory where the backup file will be created

You need to execute it against the database that you will be backing up and give the “public” role execute permissions on the procedure (an example follows the script below).


Code
exec dbo.sp_executesql @statement = N'
/*
Created:
 07.18.2008 by Boyan Kostadinov (boyank@gmail.com)

Dependencies:
 None

Usage:
 exec dbo.backupDatabase ''ensembleVideo'', ''C:\Temp''

Parameters:
 @databaseName - varchar
 - The database to backup

 @backupDirectory - varchar
 - The path to where the database should be
 backed up. This should be an existing directory on
 the SQL Server where the database is located

Description:
 Backs up a given database to the specified directory
*/
create procedure dbo.backupDatabase
 @databaseName varchar(100),
 @backupDirectory varchar(1000)
as
declare @backupFileName varchar(100),
 @databaseDataFilename varchar(100), @databaseLogFilename varchar(100),
 @databaseDataFile varchar(100), @databaseLogFile varchar(100),
 @execSql varchar(1000)

-- If the backup directory does not end with ''\'', append one
if charindex(''\'', reverse(@backupDirectory)) > 1
 set @backupDirectory = @backupDirectory + ''\''

-- Create the backup file name based on the backup directory, the database name and today''s date
set @backupFileName = @backupDirectory + @databaseName + ''-'' + replace(convert(varchar, getdate(), 110), ''-'', ''.'') + ''.bak''

set @execSql = ''
backup database ['' + @databaseName + '']
to disk = '''''' + @backupFileName + ''''''
with
  noformat,
  noinit,
  name = '''''' + @databaseName + '' backup'''',
  norewind,
  nounload,
  skip''

exec(@execSql)'
go
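
Once the procedure has been created, grant the “public” role execute permissions on it and give it a quick test run. This is a minimal sketch; the database name below is just an example, and the directory must already exist on the SQL Server machine:

grant execute on dbo.backupDatabase to public
go

-- Back up the ensembleVideo database to C:\Temp on the SQL Server machine
exec dbo.backupDatabase 'ensembleVideo', 'C:\Temp'
go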

NAnt Backup Script

The script is somewhat complex. Here is the list of features:

  • Back up a local SQL Server instance or a SQL Server instance on the same network. The key requirement is that the file system of the networked instance must be available through UNC shares to the machine executing the backup script.
  • Use a specified connection string or read the connection string from a .NET (or other type of XML) configuration file.

I will not dive into the script itself since its properties and flow are documented fairly well.

Code

<project name="backupDatabase" default="run" xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">
 <!-- Set the name of the backup file that will be created -->
 <property name="zipFileName" value="databaseBackup" overwrite="false" />

 <!-- Set the path where the backup file will be finally stored -->
 <property name="localBackupDirectory" value="C:\Temp" overwrite="false" />

 <!-- Set the local path (relative to the SQL Server instance) where SQL Server will write the backup file -->
 <!-- This has to always be a local path since SQL server can't write to network paths -->
 <property name="sqlServerLocalBackupDirectory" value="C:\Temp" overwrite="false" />

 <!-- Set the UNC path to the above "sqlServerLocalBackupDirectory" local path -->
 <!-- This is only needed if you are backing up SQL server on the network -->
 <property name="sqlServerUNCBackupDirectory" value="\\beehive\windows$\Temp" overwrite="false" />
 <!-- For backing up a local SQL Server instance, comment out the first "sqlServerUNCBackupDirectory" property -->
 <!-- and uncomment the one below this line, or set the "sqlServerUNCBackupDirectory" property to "" -->
 <!--<property name="sqlServerUNCBackupDirectory" value="" overwrite="false" />-->

 <!-- Set the .NET connection string for connecting to the database -->
 <!-- This setting always takes precedence over the config file below -->
 <!-- To use a config file instead, set this property to "" (like so value="") -->
 <property name="connectionString" value="Data Source=beehive\sql2005;Initial Catalog=ensembleVideo;User ID=test;Password=test;" overwrite="false" />

 <!-- Alternatively set the path to where the build script should get the connection string from  -->
 <!-- This is usually a app.config or connectionString.config file -->
 <property name="connectionStringConfigFilePath" value="C:\Inetpub\wwwroot\myApp\config\connectionStrings.config" overwrite="false" />

 <!-- Set the XPath expression that will be used to grab the connection string from the config file -->
 <property name="connectionStringXPath" value="/connectionStrings/add[@name = 'sqlServerConnection']/@connectionString" overwrite="false" />

 <!-- Set the regular expression that's needed to get the database name from the connection string -->
 <property name="getDatabaseNameFromConnecionStringRegEx" value="Initial Catalog=(?'databaseName'.*?);" overwrite="false" />

 <property name="todaysDate" value="${string::substring(string::replace(datetime::to-string(datetime::now()), '/', '.'), 0, 10)}" />
 <property name="todaysLocalBackupDirectory" value="${path::combine(localBackupDirectory, todaysDate)}" />

   <target name="run">
  <!-- If the SQL Server UNC directory was not specified and the local backup directory exists -->
  <if test="${string::get-length(sqlServerUNCBackupDirectory) == 0 and directory::exists(localBackupDirectory)}">
    <!-- This is a backup of a local SQL Server instance -->
   <mkdir dir="${todaysLocalBackupDirectory}" />

   <property name="todaysSqlServerLocalBackupDirectory" value="${todaysLocalBackupDirectory}" />
   <property name="todaysSqlServerUNCBackupDirectory" value="${todaysLocalBackupDirectory}" />

   <property name="localSqlServer" value="true" />
  </if>

  <if test="${string::get-length(sqlServerUNCBackupDirectory) > 0 and directory::exists(sqlServerUNCBackupDirectory) }" >
   <property name="todaysSqlServerLocalBackupDirectory" value="${path::combine(sqlServerLocalBackupDirectory, todaysDate)}" />
   <property name="todaysSqlServerUNCBackupDirectory" value="${path::combine(sqlServerUNCBackupDirectory, todaysDate)}" />

   <!-- This is a backup of a networked SQL Server instance -->
   <mkdir dir="${todaysSqlServerUNCBackupDirectory}" />

   <property name="localSqlServer" value="false" />
  </if>

  <!-- If the connection string is empty,
  the connection string file exists and the XPath to find the connection string is not empty -->
  <if test="${string::get-length(connectionString) == 0 and file::exists(connectionStringConfigFilePath)
       and string::get-length(connectionStringXPath) > 0}" >
   <!-- Get the connection string to the database from the connection string config file -->
   <xmlpeek
    file="${connectionStringConfigFilePath}"
    xpath="${connectionStringXPath}"
    property="connectionString">
   </xmlpeek>
  </if>

  <!-- If the connectionString property is not empty and the regular expression to get the database name is not empty -->
  <if test="${string::get-length(connectionString) > 0 and string::get-length(getDatabaseNameFromConnecionStringRegEx) > 0}">
   <!-- Get the database name from the connection string -->
   <regex pattern="${getDatabaseNameFromConnecionStringRegEx}" input="${connectionString}" />
 
    <!-- Execute the stored procedure to back up the database -->
   <sql connstring="Provider=SQLOLEDB;${connectionString}" transaction="false" delimiter=";" delimstyle="Normal">
   exec dbo.backupDatabase '${databaseName}', '${todaysSqlServerLocalBackupDirectory}';
   </sql>

    <!-- Zip up the created database backup file -->
   <zip zipfile="${path::combine(todaysSqlServerUNCBackupDirectory, zipFileName + '-' + todaysDate + '.zip')}" ziplevel="9">
    <fileset basedir="${todaysSqlServerUNCBackupDirectory}">
     <exclude name="**/*.zip" />
     <include name="*.*" />
    </fileset>
   </zip>

   <!-- Delete all other files in the today's backup directory except for the created zip files -->
   <delete>
    <fileset basedir="${todaysSqlServerUNCBackupDirectory}">
     <exclude name="**/*.zip" />
     <include name="*.*" />
    </fileset>
   </delete>

   <!-- Move the contents of today's backup directory to the local backup directory -->
   <move todir="${localBackupDirectory}">
    <fileset basedir="${todaysSqlServerUNCBackupDirectory}">
     <include name="*.*" />
    </fileset>
   </move>

   <!-- Delete the "todays" directories -->
   <if test="${localSqlServer}">
    <delete dir="${todaysLocalBackupDirectory}" />
   </if>

   <if test="${not localSqlServer}">
    <delete dir="${todaysSqlServerUNCBackupDirectory}" />
   </if>
  </if>
  </target>
</project>

Setting Up Backup Script and Scheduling

Review the build script and set the following properties to match your setup:
  • zipFileName – The name of the zip file that will be created for the database backup
  • localBackupDirectory – The local directory where the backup will be stored
  • sqlServerLocalBackupDirectory - The local path (relative to the SQL Server instance) where SQL Server will write the backup file. This always has to be a local path since SQL Server can't write to network paths.
  • sqlServerUNCBackupDirectory – The UNC path to the above "sqlServerLocalBackupDirectory" local path. This is only needed if you are backing up a SQL Server on the network. For backing up a local SQL Server instance, set this to "".
  • connectionString - The .NET connection string for connecting to the database. This setting always takes precedence over the “connectionStringConfigFilePath“ setting. To use a configuration file instead, set this property to "".
  • connectionStringConfigFilePath - Alternatively, set the path from which the build script should read the connection string. This is usually an app.config or connectionStrings.config file (a sample is shown after this list).
  • connectionStringXPath - The XPath expression that will be used to grab the connection string from the configuration file.
  • getDatabaseNameFromConnecionStringRegEx - The regular expression that's needed to get the database name from the connection string.
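
For reference, here is a minimal sketch of a connectionStrings.config file that the default "connectionStringXPath" above would match (the connection name "sqlServerConnection" and all of the connection string values are placeholders):

<connectionStrings>
  <add name="sqlServerConnection"
       connectionString="Data Source=myServer\sql2005;Initial Catalog=myDatabase;User ID=myUser;Password=myPassword;" />
</connectionStrings>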


The next step is to create the schedule with either the Task Scheduler or the “at” command.

To use the Task Scheduler:

  1. Create a .bat file with the following: “driveLetter:\path\to\nant.exe /f:pathToNAntBackupScript.build” and of course replace that with the actual path to nant.exe and to the NAnt build script you got here.
  2. Go to Control Panel / Scheduled Tasks / Add Scheduled Task
  3. Browse for the .bat file you created in #1
  4. Configure the schedule to run as often as you would like

To use the “at” command:

  1. Do the same as #1 above.
  2. Open a command prompt and execute the “at” command:
    ”at 23:00 /every:M,T,W,Th,F pathToBatFileFromStep1.bat”

    That will schedule the task to execute every weekday (Monday through Friday) at 11:00pm. You can get more info on the “at” command from How To Use the AT Command to Schedule Tasks.

Bonus

You don’t have to hard-code the values in the NAnt build script. You can pass them from the command line, so you can reuse the same script for different databases. You simply need to call the script with -D:propertyName="value" like so:

path\to\nant.exe /f:pathToNAntBackupScript.build -D:zipFileName="myDatabase" -D:localBackupDirectory="D:\Temp"

Downloads

http://tech-cats.net/blog/downloads/sql/procedure-dbo.backupDatabase.txt
http://tech-cats.net/blog/nantScripts/backupDatabase.build
http://tech-cats.net/blog/nantScripts/backupDatabase.bat

Monday, November 03, 2008

Using NAnt to Traverse Directories and Execute SQL

In the latest version of Ensemble Video I needed to migrate the old mechanism of showing content to a new one that involved a single page and some URL rewriting (more on that another time). The basics of the old mechanism were that for each publishing point used by the application, a separate directory with a specific “index.aspx” was created. That was always a major pain when updates had to be applied, since I had to preserve the old files but still update existing content. Long story short, I got away from that, but I still needed to figure out which publishing points were in use so I could set the database flag for the new mechanism. So, NAnt fanatic that I am, I built a NAnt script to do that. In this example, you will learn:

  1. How to parse XML for a certain value
  2. How to traverse directories with NAnt
  3. How to execute an “if” statement in NAnt
  4. How to get file and directory names using the NAnt built-in functions
  5. How to execute SQL script with SQL task from NAntContrib

The prerequisites are that 1) you have NAnt installed and 2) you have NAntContrib installed.

So let’s go one by one from the list above and you will see the final script at the end.

  1. How to parse XML for a certain value
    <xmlpeek
    file="${path::combine(path::get-full-path(webApplicationDirectory), 'config\connectionStrings.config')}"
    xpath="/connectionStrings/add[@name = 'ensembleVideoConnection']/@connectionString"
    property="connectionString" />

    This parses the XML file under webApplicationDirectory\config\connectionStrings.config and uses an XPath expression to get the connectionString attribute of the entry named ensembleVideoConnection. The value is stored in the NAnt property "connectionString".
  2. How to traverse directories with NAnt
    <foreach item="Folder" in="${sitesDirectory}" property="orgDirectoryName">
     <echo message="${orgDirectoryName}" />
    </foreach>

    This task traverses through the directories inside the directory specified by "sitesDirectory", stores the current directory in orgDirectoryName and then echoes the current directory.
  3. How to execute an “if” statement in NAnt
    <if test="${string::to-lower(path::get-file-name(orgDirectoryName)) != '_svn'}">
     <echo message="${path::get-file-name(orgDirectoryName)}" />
    </if>

    Here we test whether the current directory name is not equal to "_svn". Here is another example where we test whether the current directory contains a desired file:
    <if test="${file::exists(path::combine(webSiteDirectoryName, 'index.aspx'))}">
     <property name="enableQuickPublish" value="1" />
    </if>

  4. How to get file and directory names using the NAnt built-in functions
    <!-- Get only the directory name -->
    path::get-file-name(orgDirectoryName)
    <!-- Get the directory name from the path -->
    path::get-directory-name(orgDirectoryName)
    <!-- Create a full path based on the directory name and a file name -->
    path::combine(webSiteDirectoryName, 'index.aspx')

    You can find out more about NAnt built-in functions at NAnt Functions
  5. How to execute SQL script with SQL task from NAntContrib
    <sql
      connstring="Provider=SQLOLEDB;${connectionString}"
      transaction="true"
      delimiter=";"
      delimstyle="Normal">
     update dbo.webSites
     set isQuickPublished = ${enableQuickPublish}
     where webSiteID = (
       select webSiteID
       from dbo.listWebSites
       where webSafeOrganizationName = '${path::get-file-name(orgDirectoryName)}'
        and webSafeDepartmentName = '${path::get-file-name(departmentDirectoryName)}'
        and webSafeWebSiteName = '${path::get-file-name(webSiteDirectoryName)}'
      );
    </sql>

    There is not much to the SQL statement execution. As you can see, you can use NAnt properties and functions inside the SQL script. You can check out all of NAntContrib's tasks at NAntContrib Tasks.

As promised here is the full contents of the build script:

<project name="renameSqlToText" default="run" xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">
 <!-- Set the path to the web application -->
 <property name="webApplicationDirectory" value="D:\_Boyan's Documents\_Projects\_ensembleVideo\trunk\webApplication\app" />

 <!-- Set the name of the sites directory (under the application directory) -->
 <property name="sitesDirectoryName" value="sites" />

 <property name="sitesDirectory" value="${path::combine(path::get-full-path(webApplicationDirectory), sitesDirectoryName)}" overwrite="true" />

 <target name="run">
  <!-- Get the database connection string from the web application -->
  <xmlpeek
   file="${path::combine(path::get-full-path(webApplicationDirectory), 'config\connectionStrings.config')}"
   xpath="/connectionStrings/add[@name = 'ensembleVideoConnection']/@connectionString"
   property="connectionString" />

  <!-- For each directory in the sitesDirectory -->
  <foreach item="Folder" in="${sitesDirectory}" property="orgDirectoryName">
   <if test="${string::to-lower(path::get-file-name(orgDirectoryName)) != '_svn'}">
    <echo message="${path::get-file-name(orgDirectoryName)}" />

    <foreach item="Folder" in="${orgDirectoryName}" property="departmentDirectoryName">
     <echo message="  ${path::get-file-name(departmentDirectoryName)}" />

     <foreach item="Folder" in="${departmentDirectoryName}" property="webSiteDirectoryName">
      <echo message="    ${path::get-file-name(webSiteDirectoryName)}" />

      <!-- Set the 'enableQuickPublish' property to false -->
      <property name="enableQuickPublish" value="0" />

      <!-- If a index.aspx file exists, set the 'enableQuickPublish' property to true -->
      <if test="${file::exists(path::combine(webSiteDirectoryName, 'index.aspx'))}">
       <property name="enableQuickPublish" value="1" />
      </if>

      <!-- Run a database query to update the flag in the database -->
            <sql
              connstring="Provider=SQLOLEDB;${connectionString}"
              transaction="true"
              delimiter=";"
              delimstyle="Normal">
       update dbo.webSites
       set isQuickPublished = ${enableQuickPublish}
       where webSiteID = (
         select webSiteID
         from  dbo.listWebSites
         where  webSafeOrganizationName = '${path::get-file-name(orgDirectoryName)}'
           and
           webSafeDepartmentName = '${path::get-file-name(departmentDirectoryName)}'
           and
           webSafeWebSiteName = '${path::get-file-name(webSiteDirectoryName)}'
         );
           </sql>
     </foreach>
    </foreach>
   </if>
  </foreach>
 </target>
</project>

Downloads

http://tech-cats.net/blog/nantScripts/traverseDirectoriesAndExecuteSQL.build

Friday, October 31, 2008

Export UML to XMI from Visio 2003

A friend wanted to export his UML diagrams created with Visio to XMI files so that we could run some kind of ColdFusion tool to generate code based on the created diagrams. It appeared, however, that Visio did not offer a way to do that out of the box. Some googling revealed that there is an add-on Microsoft offers, but you have to get it, compile it (in Visual C++ no less) and install it. What a major hassle. So being the problem solver I tend to be, I installed Visual C++ and went at it. Almost an hour later, the procedure below did the trick.

Something to Note: The supported UML diagrams are: Static Structure, Component and Deployment diagrams, and State chart (directly from Microsoft’s documentation about the add-on).

  1. Get the zip file below
  2. Copy “XMIEXPRT.DLL” to “C:\Program Files\Microsoft Office\Visio11\DLL\”
  3. Copy “VisioUmlAddon.vsl” to “C:\Program Files\Microsoft Office\Visio11\1033\”
  4. Open Visio 2003 and:
    1. Go to Tools/Options/Security/Macro Security
    2. Set the Macro Security to “Medium” or “Low”
    3. Go back to Options and go to Advanced
    4. Click on “File Paths” and set the “Add-ons” path to
      “C:\Program Files\Microsoft Office\Visio11\1033”
    5. Restart Visio 2003
  5. Now you should be able to create a UML diagram, go to Tools/Add-Ons and see “XMI Export” all the way at the bottom.

Downloads

http://tech-cats.net/blog/downloads/UML-to-XMI-for-Visio2003.zip

Wednesday, July 23, 2008

Rename All Files in a Directory with NAnt

It’s a pretty simple thing to do, but I had to spend a few minutes today figuring out how to accomplish it. So here is a quick build file to rename all .sql files in a specified directory (relative to the script run directory) to .txt. Funnily enough, this script gets called by my automation program before it uploads any SQL scripts to my blog. There is nothing to it and there are comments, so here goes:

<project name="renameSqlToText" default="run" xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd">
 <!-- The relative path to the directory where the files are located -->
 <property name="sqlDirectory" value="..\downloads\sql" />

 <target name="run">
  <!-- For each file in the directory -->
  <foreach item="File" in="${sqlDirectory}" property="fileName">
   <!-- If the file has the .sql extension -->
   <if test="${string::to-lower(path::get-extension(fileName)) == '.sql'}">
    <!-- Rename the file to .txt extension -->
    <move file="${fileName}" tofile="${path::combine(path::get-directory-name(fileName), path::get-file-name-without-extension(fileName) + '.txt')}" />
   </if>
  </foreach>
 </target>
</project>

Downloads

http://tech-cats.net/blog/nantScripts/renameSqlToText.build

Tuesday, July 22, 2008

Silly Little SQL Server Information Schema Helpers

If you work with SQL Server like I do, it is often necessary to drop functions and/or procedures through T-SQL. There are two ways to do this:

  1. Using the sysobjects system table built into SQL Server
  2. Using the information schema

Microsoft says they do not recommend the sysobjects way since things might change in future versions of SQL Server. Regardless, just for reference, here is how to drop a view if it exists:

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[listVideos]') and objectproperty(id, N'IsView') = 1)
drop view [dbo].[listVideos]
go

Doing the same with a stored procedure is not really much different:

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[addEncoding]') and objectproperty(id, N'IsProcedure') = 1)
drop procedure dbo.addEncoding
go

Now for the more recommended way, using the information schema:

Dropping a view:

if exists (select * from information_schema.views where table_name = 'listVideos')
  drop view dbo.listVideos
go

Dropping a stored procedure:

if exists (select * from information_schema.routines where routine_name = 'addEncoding' and routine_type = 'procedure')
  drop procedure dbo.addEncoding
go

That is all good and swell, but if you are like me, you might hate writing that long SQL statement every time, so I thought I would shorten it a bit. Here is what I came up with for querying the information schema for procedures and functions. The same can be easily applied to views and tables (see the sketch near the end of this post).

First, we need a single function to check if a procedure or a function exists. I called it “routineExists”. You can download it as a text file from http://tech-cats.net/blog/downloads/sql/routineExistsFunction.txt. Here is the script to create it in your database:

if exists (select * from information_schema.routines where routine_name = 'routineExists' and routine_type = 'function')
 drop function dbo.routineExists
go

exec dbo.sp_executesql @statement = N'
/*
Created:
 07.22.2008 by Boyan Kostadinov (boyank@gmail.com)

Dependencies:
 information_schema.routines

Parameters:
 @routineName varchar(255)
 - The name of the procedure or function

 @routineType varchar(25)
 - The type of routine (usually ''procedure'' or ''function'')

Usage:
 select dbo.routineExists(''getCopyInformation'', ''procedure'')
 select dbo.routineExists(''getCopyInformation'', ''function'')
 if (select dbo.routineExists(''getCopyInformation'', ''procedure'')) = 1 print ''exists''

Returns:
 0 if the routine does not exist
 1 if the routine exists

Description:
 Checks the information schema view for the existence of the routine name passed in
*/
create function dbo.routineExists(@routineName varchar(255), @routineType varchar(25))
 returns bit
as
begin
 declare @routineExists bit
 set @routineExists = 0

 if exists (
  select routine_name
  from  information_schema.routines
  where  routine_name = @routineName
      and
      routine_type = @routineType
 ) set @routineExists = 1
 else set @routineExists = 0

 return @routineExists
end
'

The usage is quite simple:

-- Does the stored procedure 'getCopyInformation' exist; will return 0 or 1
select dbo.routineExists('getCopyInformation', 'procedure')

-- Does the function 'getCopyInformation' exist; will return 0 or 1
select dbo.routineExists('getCopyInformation', 'function')

-- If the procedure 'getCopyInformation' exists, print 'exists'
if (select dbo.routineExists('getCopyInformation', 'procedure')) = 1 print 'exists'

Better, but still not quite satisfactory. The next step is to wrap the "routineExists” function in individual functions, one for procedures and one for functions.

Let’s start with the procedures one. You can download it from http://tech-cats.net/blog/downloads/sql/procedureExistsFunction.txt. I creatively called it “procedureExists”:

if (select dbo.routineExists('procedureExists', 'function')) = 1 drop function dbo.procedureExists
go

exec dbo.sp_executesql @statement = N'
/*
Created:
 07.22.2008 by Boyan Kostadinov (boyank@gmail.com)

Dependencies:
 information_schema.routines

Parameters:
 @procedureName varchar(255)
 - The name of the procedure

Usage:
 select dbo.procedureExists(''getCopyInformation'')
 if (select dbo.procedureExists(''getCopyInformation'')) = 1 print ''exists''

Returns:
 0 if the procedure does not exist
 1 if the procedure exists

Description:
 A wrapper for dbo.routineExists that explicitly checks
 for the existence of the procedure name passed in
*/
create function dbo.procedureExists(@procedureName varchar(255))
 returns bit
as
begin
 return (select dbo.routineExists(@procedureName, ''procedure''))
end
'

Usage is pretty much the same as the “routineExists” function without the “routineType” parameter:

-- Does the stored procedure 'getCopyInformation' exist; will return 0 or 1
select dbo.procedureExists('getCopyInformation')

-- If the procedure 'getCopyInformation' exists, print 'exists'
if (select dbo.procedureExists('getCopyInformation')) = 1 print 'exists'

The function version is called “functionExists”. You can download it from http://tech-cats.net/blog/downloads/sql/functionExistsFunction.txt.

if (select dbo.routineExists('functionExists', 'function')) = 1 drop function dbo.functionExists
go

exec dbo.sp_executesql @statement = N'
/*
Created:
 07.22.2008 by Boyan Kostadinov (boyank@gmail.com)

Dependencies:
 information_schema.routines

Parameters:
 @functionName varchar(255)
 - The name of the function

Usage:
 select dbo.functionExists(''getCopyInformation'')
 if (select dbo.functionExists(''getCopyInformation'')) = 1 print ''exists''

Returns:
 0 if the function does not exist
 1 if the function exists

Description:
 A wrapper for dbo.routineExists that explicitly checks
 for the existence of the function name passed in
*/
create function dbo.functionExists(@functionName varchar(255))
 returns bit
as
begin
 return (select dbo.routineExists(@functionName, ''function''))
end
'

Example Usage:

-- Does the function 'listToTable' exist; will return 0 or 1
select dbo.functionExists('listToTable')

-- If the function 'listToTable' exists, print 'exists'
if (select dbo.functionExists('listToTable')) = 1 print 'exists'
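
As mentioned earlier, the same approach can be applied to views and tables by querying information_schema.views and information_schema.tables (for tables, also check table_type = 'BASE TABLE'). Here is a minimal "viewExists" sketch along the same lines; it is an illustration only and is not part of the downloads below:

create function dbo.viewExists(@viewName varchar(255))
 returns bit
as
begin
 declare @viewExists bit
 set @viewExists = 0

 if exists (
  select table_name
  from information_schema.views
  where table_name = @viewName
 ) set @viewExists = 1

 return @viewExists
end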

Downloads

http://tech-cats.net/blog/downloads/sql/routineExistsFunction.txt
http://tech-cats.net/blog/downloads/sql/procedureExistsFunction.txt
http://tech-cats.net/blog/downloads/sql/functionExistsFunction.txt

Friday, July 11, 2008

Using "select in" with SubSonic

While looking for SubSonic related information on something totally unrelated, I came across a post about trying to use the T-SQL "in" keyword to build a SubSonic collection at http://www.vitaminzproductions.com/technology-blog/index.php/2008/06/25/subsonic-select-where-in-solution. If that sounds confusing, imagine that you would like to have SubSonic return data that you would usually get with the following T-SQL statement:
select *
from MyTable 
where id in ('val1', 'val2', 'val3')

The author of the blog post had a solution, but it was not one I liked. It seemed like a hack and I hate hacks. So, I took a few minutes and here is the result. The key is using the new SubSonic.Select query available in SubSonic v2.1. The code below offers three different ways to accomplish the task.
C# Code:
// Define your collection object
MyTableCollection MyTableQuery;

// Define an array list for the first example
ArrayList ids = new ArrayList();
// Add some Guid values to the array list
ids.Add(new Guid("80FB189C-C3EA-4774-B15E-018DA88D3FE2"));
ids.Add(new Guid("88DE9C1D-26A7-4545-AA1E-01BDAEFF0588"));
    
// Populate the collection using an array list
MyTableQuery = new SubSonic.Select()
 .From(MyTable.Schema)
 .Where(MyTable.Columns.Id)
 .In(ids)
 .ExecuteAsCollection<MyTableCollection>();

// Populate the collection using an array of strings
MyTableQuery = new SubSonic.Select()
 .From(MyTable.Schema)
 .Where(MyTable.Columns.Id)
 .In(new string[] {
  "80FB189C-C3EA-4774-B15E-018DA88D3FE2",
  "88DE9C1D-26A7-4545-AA1E-01BDAEFF0588"})
 .ExecuteAsCollection<MyTableCollection>();
    
// Populate the collection using an inline select statement
MyTableQuery = new SubSonic.Select()
 .From(MyTable.Schema)
 .Where(MyTable.Columns.Id)
 .In(new SubSonic.Select(MyTable.Columns.Id)
 .Top("3")
 .From(MyTable.Schema))
 .ExecuteAsCollection<MyTableCollection>();
VB.NET Code:
' Define your collection object
Dim MyTableQuery As MyTableCollection

' Define an array list for the first example
Dim ids As New ArrayList

' Add some Guid values to the array list
ids.Add(New Guid("80FB189C-C3EA-4774-B15E-018DA88D3FE2"))
ids.Add(New Guid("88DE9C1D-26A7-4545-AA1E-01BDAEFF0588"))

' Populate the collection using an array list
MyTableQuery = New SubSonic.Select() _
    .From(MyTable.Schema) _
    .Where(MyTable.Columns.Id) _
    .In(ids).ExecuteAsCollection(Of MyTableCollection)()

' Populate the collection using an array of strings
MyTableQuery = New SubSonic.Select() _
    .From(MyTable.Schema) _
    .Where(MyTable.Columns.Id) _
    .In(New String() {"80FB189C-C3EA-4774-B15E-018DA88D3FE2", "88DE9C1D-26A7-4545-AA1E-01BDAEFF0588"}) _
    .ExecuteAsCollection(Of MyTableCollection)()

' Populate the collection using an inline select statement
MyTableQuery = New SubSonic.Select() _
    .From(MyTable.Schema) _
    .Where(MyTable.Columns.Id) _
    .In(New SubSonic.Select(MyTable.Columns.Id) _
    .Top("3") _
    .From(MyTable.Schema)) _
    .ExecuteAsCollection(Of MyTableCollection)()

Thursday, April 10, 2008

Code Camp 9 Presentations

This past weekend, I attended Code Camp 9 in Waltham, MA and gave two presentations:

  • No Pain Database Access with Subsonic
  • Windows and Development Automation with NAnt

I am finally getting around to posting the presentation materials and supporting files here. So here goes.

No Pain Database Access with Subsonic

While I was disappointed with the turnout for this presentation and with my preparation, I learned a couple of things:

  1. I know very little about SubSonic. It's far from only an ORM framework; it's much more, and I don't know diddly about it.
  2. I've got to stop procrastinating

The presentation is available in two forms:

  1. Online at Google Docs:
    http://docs.google.com/Present?docid=dcz9szc5_25cz3xbtcx&skipauth=true
  2. PDF format:
    http://tech-cats.net/blog/downloads/dotnet/codeCamp9/NoPainDatabaseAccessWithSubSonic.pdf

Support files are also available in two forms:

  1. Online at Google Docs:
    http://docs.google.com/View?docid=dcz9szc5_28f8wb97fc
  2. HTML version:
    http://tech-cats.net/blog/downloads/dotnet/codeCamp9/No_Pain_Database_Access.html

Downloads

 

Windows and Development Automation with NAnt

I love NAnt. This presentation turned out great. I was prepared, I had lots of examples and people were interested - cool!

Again, the presentation is available in two forms:

  1. Online at Google Docs:
    http://docs.google.com/Present?docid=dcz9szc5_24fzjs9zgc&skipauth=true
  2. PDF format:
    http://tech-cats.net/blog/downloads/dotnet/codeCamp9/WindowsAndDevelopmentAutomationWithNAnt.pdf

Support Files:

  1. Online at Google Docs:
    http://docs.google.com/Doc?id=dcz9szc5_26gnfrmkfv
  2. HTML version:
    http://tech-cats.net/blog/downloads/dotnet/codeCamp9/Windows_and_Development.html

Downloads

Thursday, March 27, 2008

Hey WMI, Where Can I Print

Windows Management Instrumentation, or WMI for short, is a very powerful and often underused technology. I am simply fascinated by how much you can accomplish using WMI. It is mostly intended for computer management tasks, but it can be used for much more. Some examples include getting all kinds of computer information and managing system services and applications. This article will cover a possible solution for listing all available printers on a particular computer and show you how to get this information through ColdFusion. WMI can be used through VBScript, C++ and .NET. However, since part of my current work is done in .NET, the solution relies on .NET and IIS. Furthermore, here are the requirements that we will satisfy:

  • List all available printers including shared printers
  • Allow for filtering the list of printers by the printer name
  • Be language independent

Accomplishing the first two tasks is fairly easy with WMI. So let's get started.

List All Available Printers

This is pretty straightforward using C#, provided that you reference the System.Management assembly and add the appropriate "using" directive:

using System.Management;


Next, we need to declare some variables to query WMI:

ObjectQuery oq;
ConnectionOptions co;
ManagementScope ms;
ManagementObjectSearcher mos;
ManagementObjectCollection moc;


Set the connection options:

co = new System.Management.ConnectionOptions();
co.Authentication = System.Management.AuthenticationLevel.Default;
co.Impersonation = System.Management.ImpersonationLevel.Impersonate;


Connect to the local machine:

// Set the management scope to the local computer
ms = new ManagementScope("\\\\" + Environment.MachineName, co);


And get the list of printers:

// Create an object query to get the printer list
oq = new ObjectQuery("select * from Win32_Printer");

mos = new ManagementObjectSearcher(ms, oq);
moc = mos.Get();


What is left to do is just loop through the management objects found and display/aggregate the printer properties:

if (moc != null)
{
    // Loop through the found printers and collect the information
    foreach (ManagementObject mo in moc)
    {
         // Do something with the printer name
         // mo.Properties["Name"].Value.ToString();
    }
}

Filtering By Printer Name

There are two ways that this can be accomplished. Let's explore both.

  1. Using a WMI query clause.
    WMI on Windows XP or newer supports the "like" operator, so filtering the printer list by the printer name or a partial printer name is trivial:

    // Create an object query to get the printer list
    oq = new ObjectQuery("select * from Win32_Printer where Name like \"%myPrinterName%\"");

    Where "myPrinterName" is the full name or part of the name of the printer. Notice to use of "%" as those work as a wildcard. 
  2. Using a simple string compare.
    Since there are plenty of servers still using Windows 2000, the string compare method is more compatible:

    if (moc != null)
    {
        // Loop through the found printers and collect the information
        foreach (ManagementObject mo in moc)
        {
             if (mo.Properties["Name"].Value.ToString().ToLower().Contains("myPrinterName"))
             {
                  // Do something with the printer name
                  // mo.Properties["Name"].Value.ToString();
             }
        }
    }

Language Independence

A simple way to implement language independence is through a web service, so we take the code above and wrap it in one. The two additions are the separation of the code into two functions (getPrinters() and getPrintersWithNameFilter()) and a small helper function called saveProperties(). Here is the final result:

using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Text;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Management;
using System.Xml;
using System.Xml.Serialization;
using System.IO;
using System.Reflection;

namespace systemInfo
{
    /// <summary>
    /// Contains system related functions
    /// </summary>
    [WebService(Namespace = "http://webapps/systemInfo")]
    [Description("Contains a set of printer related functions")]
    public class systemInfo : System.Web.Services.WebService
    {
        /// <summary>
        /// Returns a list of available printers
        /// </summary>
        /// <returns>systemData object serialized to XML</returns>
        [WebMethod]
        [Description("Returns a list of available printers")]
        public systemData getPrinters()
        {
            return getPrintersWithNameFilter(null);
        }

        /// <summary>
        /// Returns a list of available printers filtered by the name specified
        /// </summary>
        /// <param name="nameFilter">The name of the printer to filter by</param>
        /// <returns>systemData object seralized to XML</returns>
        [WebMethod(MessageName = "getPrintersWithNameFilter")]
        [Description("Returns a list of available printers filtered by the name specified")]
        public systemData getPrintersWithNameFilter(string nameFilter)
        {
            ObjectQuery oq;
            ConnectionOptions co;
            ManagementScope ms;
            ManagementObjectSearcher mos;
            ManagementObjectCollection moc;
            int counter = 0;
            ArrayList properties = null;
            systemDataPrinter printer = null;
            systemData systemDataInstance = new systemData();

            if (!String.IsNullOrEmpty(nameFilter))
            {
                nameFilter = nameFilter.ToLower();
            }

            co = new System.Management.ConnectionOptions();
            co.Authentication = System.Management.AuthenticationLevel.Default;
            co.Impersonation = System.Management.ImpersonationLevel.Impersonate;

            // Set the management scope to the local computer
            ms = new ManagementScope("\\\\" + Environment.MachineName, co);

            // Create an object query to get the printer list
            oq = new ObjectQuery("select * from Win32_Printer");

            mos = new ManagementObjectSearcher(ms, oq);
            moc = mos.Get();

            if (moc != null)
            {
                // Create an array of printer objects to store the printer data
                systemDataInstance.printer = new systemDataPrinter[moc.Count];

                // Loop through the found printers and collect the information
                foreach (ManagementObject mo in moc)
                {
                    properties = new ArrayList();
                    properties.Add(mo.Properties["Name"]);
                    properties.Add(mo.Properties["Location"]);
                    properties.Add(mo.Properties["ShareName"]);
                    properties.Add(mo.Properties["Description"]);
                    properties.Add(mo.Properties["Caption"]);

                    if (!String.IsNullOrEmpty(nameFilter))
                    {
                        // Filter the printers returned by the name filter specified
                        // Also, exclude printers that start with "__" since those are printer sessions
                        if (mo.Properties["Name"].Value.ToString().ToLower().Contains(nameFilter)
                        && !mo.Properties["Name"].Value.ToString().StartsWith("__"))
                        {
                            printer = new systemDataPrinter();

                            // Save all the properties from the array list
                            // to the printer object
                            saveProperties(properties, ref printer);

                            // Add the current printer object to the array of printers
                            systemDataInstance.printer[counter] = printer;

                            counter++;
                        }
                    }
                    // Exclude printers that start with "__" since those are printer sessions
                    else if (!mo.Properties["Name"].Value.ToString().StartsWith("__"))
                    {
                        printer = new systemDataPrinter();

                        // Save all the properties from the array list
                        // to the printer object
                        saveProperties(properties, ref printer);

                        // Add the current printer object to the array of printers
                        systemDataInstance.printer[counter] = printer;

                        counter++;
                    }
                }
            }

            return systemDataInstance;
        }

        /// <summary>
        /// Saves the properties passed by the ManagementObject into
        /// an instance of the printer object
        /// </summary>
        /// <param name="properties">A list of printer properties</param>
        /// <param name="printer">The printer object where the properties will be stored</param>
        private void saveProperties(ArrayList properties, ref systemDataPrinter printer)
        {
            PropertyInfo pi = null;
            string value = String.Empty;

            foreach (PropertyData prop in properties)
            {
                value = Convert.ToString(prop.Value);

                // Get the printer property corresponding to the current property name
                pi = printer.GetType().GetProperty(prop.Name, BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);

                if (pi != null)
                {
                    // Set the value of the current property through reflection
                    pi.SetValue(printer, value, null);
                }
            }
        }
    }
}

Consuming with ColdFusion

Getting the information from the web service and into a ColdFusion-readable format is as easy as calling any other web service. First you create a web service object and point it to the WSDL URL:

<cfset systemInfoService = createObject("webservice", "http://webapps/systemInfo/systemInfo.asmx?WSDL") />


Next, you call the getPrinters() function to get a list of all the printers installed on the computer:

<cfset printersArray = systemInfoService.getPrinters().getPrinter() />


The getPrinter() call at the end returns an array of printer objects.

If you would like to filter the list of printers and get only the printers that are installed on a different server but shared locally, you just need to pass a string containing part of the server name:

<cfset printersArray = systemInfoService.getPrintersWithNameFilter(nameFilter = "\\ntserver_hq5").getPrinter() />


Where "ntserver_hq5" is the name of the server where the printer is actually installed. For network printers installed locally, WMI returns the server name as part of the printer name.

Once you perform the web service call, you simply output the printers array as follows:

<cfoutput>

<ul>
<cfloop from="1" to="#arraylen(printersArray)#" index="i">
 <li>
   #printersArray[i].getName()#
    <ol>
      <li>
      Location: #printersArray[i].getLocation()#
      </li>
      <li>
      ShareName: #printersArray[i].getShareName()#
      </li>
      <li>
      Description: #printersArray[i].getDescription()#
      </li>
      <li>
      Caption: #printersArray[i].getCaption()#
      </li>
    </ol>
  </li>
</cfloop>
</ul>

</cfoutput>

Implementation Details

The service returns five properties for each printer. Here is an example XML:

<?xml version="1.0" encoding="utf-8" ?>
<systemData xmlns:xs="http://webapps/systemInfo">
  <printer>
    <name>t</name>
    <location>t</location>
    <shareName>t</shareName>
    <description>t</description>
    <caption>t</caption>
  </printer>
  <printer>
    <name>t</name>
    <location>t</location>
    <shareName>t</shareName>
    <description>t</description>
    <caption>t</caption>
  </printer>
</systemData>


WMI actually returns quite a few properties for each printer. However, I chose only the Name, Location, ShareName, Description and Caption as relevant.

Once the sample XML is defined, Visual Studio makes it pretty easy to generate an XML Schema from your sample XML by going to the "XML" menu and clicking on "Create Schema".

You end up with the following schema:

<?xml version="1.0" encoding="utf-8"?>
<schema xmlns:xs="http://webapps/systemInfo" attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns="http://www.w3.org/2001/XMLSchema">
  <element name="systemData">
    <complexType>
      <sequence>
        <element maxOccurs="unbounded" name="printer">
          <complexType>
            <sequence>
              <element name="name" type="string" />
              <element name="location" type="string" />
              <element name="shareName" type="string" />
              <element name="description" type="string" />
              <element name="caption" type="string" />
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
</schema>


To convert the schema to a C# class that we can populate at runtime, serialize and return, we can use the xsd.exe utility with the following arguments:

C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\xsd.exe "path\to\schema.xsd" /classes /nologo /out:"path/to/output.cs"


This way we end up with a serializable C# class which can be populated at runtime and still be serialized as our example XML. I called mine systemData and used it above as:

// Create a new instance of the systemData class
systemData systemDataInstance = new systemData();

// Create a new instance of the systemDataPrinter class
systemDataPrinter printer = null;

// Create an array of printer objects to store the printer data
systemDataInstance.printer = new systemDataPrinter[moc.Count];


where moc.Count is the number of WMI ManagementObjects returned by the WMI query.

The last point worth mentioning is inside the saveProperties() function. The function takes an array list of properties, loops over them and, for each one, sets the corresponding property in the systemDataPrinter class. It does so by using reflection:

// Get the printer property corresponding to the current property name
pi = printer.GetType().GetProperty(prop.Name, BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);

if (pi != null)
{
    // Set the value of the current property through reflection
    pi.SetValue(printer, value, null);
}

Gotchas

  1. To get network printers listed, you have to have the printer installed on the machine under a local account and configure the web service to use that account when performing the query. You do this with the "identity" element, enabling impersonation and specifying the user and the password:

    <identity impersonate="true" userName="user@domain.com" password="password" />

  2. Once you use xsd.exe to generate the C# serializable class, you have to edit it and add the namespace to the root node attribute:

    [System.Xml.Serialization.XmlRootAttribute(Namespace = "http://webapps/systemInfo", IsNullable = false)]
    
  3. The web service has to be set up in IIS by creating an application and setting the directory security to allow anonymous access.
  4. The web service has to be configured to allow calls through HTTP GET in the Web.config:

    <webServices>
      <protocols>
        <add name="HttpGet"/>
      </protocols>
    </webServices>

Downloads

Web Service Implementation: systemInfo.printer.zip

Monday, March 24, 2008

Simple Logging in Your .NET Application

Logging can be a very useful tool when developing any application. Furthermore, logging is a must when deploying applications to a production environment. Keeping track of errors and informational messages can help you greatly reduce the time it takes to debug and fix a problem. As in any language, there is more than one way to enable logging in .NET. This article will cover getting started with logging using the free log4net framework.

To Get Started

  1. Get log4net from http://logging.apache.org/log4net/download.html
  2. Create a sample .NET application and call it "sampleLoggingApp"
  3. Create a place to store the log4net DLL, I usually create a "dependencies" directory
  4. Copy the extracted log4net DLL from "bin\net\2.0\release\log4net.dll" to your "dependencies" directory
  5. Set your Visual Studio to show all files, include the log4net DLL in your project and add a reference to it in your project

The Fun Stuff

Configuration

  1. Create your log4net configuration file; we will call it log4net.config (it uses a rolling file appender, so entries keep appending to the same log file):

    <log4net>
      <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
        <file value="sampleLog.log" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger [%ndc] - %message%newline" />
        </layout>
      </appender>
    
      <root>
        <level value="DEBUG" />
        <appender-ref ref="RollingLogFileAppender" />
      </root>
    </log4net>

  2. Set the path to the log4net.config file:

    private static String m_log4netConfigFile = Path.Combine(Path.GetDirectoryName(Application.ExecutablePath), "log4net.config");

    This sets the path of the log4net.config file to be in the same directory as your executable. Note that you also need to tell log4net to load this configuration, for example by calling log4net.Config.XmlConfigurator.ConfigureAndWatch() with this file before you start logging.

  3. Create a static log instance:
    private static readonly ILog m_log = LogManager.GetLogger(typeof(Form1));

    Where "Form1" is the name of your application instance.
  4. Set a build event in Visual Studio that will copy the log4net configuration file in the output directory on every build:

    xcopy /y "$(ProjectDir)log4net.config" "$(TargetDir)"

  5. Write some information to the log:

    m_log.Debug("Inside Form1_Load");

More Information

I have covered the basics of getting started. However, log4net has quite a few more possible configurations such as

  • Log to a database table
  • Log to the console
  • Log to the event log
  • Log to an email address

More information for each configuration can be found at http://logging.apache.org/log4net/release/config-examples.html

Downloads

sampleLoggingApp.zip

Thursday, March 20, 2008

Model-Glue Tips and Tricks

  1. Turn off automatic reloading of your application


    This will greatly reduce the time the application needs to process each request. This is the way I develop Model-Glue applications. In your Model-Glue configuration (the "modelGlueConfiguration" bean in ColdSpring.xml), look for the "reload" property and set it to false:

    <property name="reload">
    <value>false</value>
    </property>

    Keep in mind that changes made to anything other than your views will require an application reload. For my approach to that, read the next tip.
  2. Use two browsers


    The basic idea here is to use one browser for your development and one for reloading the application. I suggest this because if your application keeps session state, reloading the application from the same browser will reset the session, while reloading it from a different browser will not. If you turn off Model-Glue's automatic reload, your application will run faster, but you will not see any changes made in your controllers or model. Once you have made a change, you will need to manually reload the application by invoking the "reloadKey" with the "reloadPassword" as defined in the ColdSpring.xml file. By default, the "reloadKey" is set to "init" and the "reloadPassword" is set to "true", so you would reload by invoking "index.cfm?init=true". To take this further, I add the following code to my onRequestStart inside Application.cfc to reload the application if you simply go to "/yourApplication/?init":

    <cffunction name="OnRequestStart" output="no">
    <cfif structKeyExists(url, "init")>
    <cfset structclear(Application) />
    <cfset structclear(Session) />

    <cfset onApplicationStart() />
    </cfif>
    </cffunction>
  3. Turn on debugging


    Debugging is enabled by default, but to turn it on or off you change the "true/false" value inside the "modelGlueConfiguration" bean in your ColdSpring.xml file:

    <property name="debug">
    <value>true</value>
    </property>
  4. Use a custom ColdSpring bean for your configuration settings


    To store your own configuration values to be used throughout the application, you can take advantage of Model-Glue's SimpleConfig bean. This bean configuration will go in your ColdSpring.xml file and look something like:

    <bean id="applicationConfiguration" class="ModelGlue.Bean.CommonBeans.SimpleConfig">
    <property name="config">
    <map>
    <!-- The network path where the bartender templates are stored -->
    <entry key="networkPathToLabelTemplates">
    <value>\\larry\indium\Production\labelGeneration</value>
    </entry>
    </map>
    </property>
    </bean>

    To access your applicationConfiguration bean, you would use:
    getModelGlue().getBean("applicationConfiguration", true)

    I usually do this inside the "init" method of my controller:
    <cfset variables.appConfig = getModelGlue().getBean("applicationConfiguration", true) />

    And use it later on with:

    <!--- Get the path to the label templates from the configuration --->
    <cfset var networkPathToLabelTemplates = variables.appConfig.getConfigSetting("networkPathToLabelTemplates") />
  5. Take advantage of an ORM framework such as Reactor


    Initial versions of Model-Glue used to come with Reactor, but newer versions no longer do. I found it is a little difficult for a novice to start using Reactor with Model-Glue. To take advantage of Reactor, you have to download it, unzip it and create a ColdFusion mapping called "reactor" that points to your Reactor directory. Next comes the Model-Glue configuration. You need to add the following bean definitions to your ColdSpring.xml:

    <bean id="ormAdapter" class="ModelGlue.unity.orm.ReactorAdapter">
    	<constructor-arg name="framework">
    		<ref bean="ModelGlue" />
    	</constructor-arg>
    </bean>
    
    <bean id="ormService" class="reactor.reactorFactory">
    	<constructor-arg name="configuration">
    		<ref bean="reactorConfiguration" />
    </constructor-arg>
    </bean>


    And of course you need the "reactorConfiguration" bean:
    <bean id="reactorConfiguration" class="reactor.config.config">
    <constructor-arg name="pathToConfigXml">
    <value>/labelGeneration/config/Reactor.xml</value>
    </constructor-arg>

    <property name="project">
    <value>labelGeneration</value>
    </property>

    <property name="dsn">
    <value>labelGeneration</value>
    </property>

    <property name="type">
    <value>mssql</value>
    </property>

    <property name="mapping">
    <value>/labelGeneration/model</value>
    </property>

    <property name="mode">
    <value>production</value>
    </property>
    </bean>
    Now you can take advantage of Model-Glue's built-in scaffolding and generic messages (genericList, genericRead, genericCommit and genericDelete).
  6. Use event beans


    An event bean is a predefined CFC that will store values from your form. It works by creating a one-to-one relationship between your form fields and getters/setters in your CFC. Once you have your event bean created, you can populate it from everything in the form scope by using the makeEventBean function:

    <!--- Create an instance of the bean --->
    <cfset var completePrintJobFormBean = getModelGlue().getBean("completePrintJobFormBean") />

    <cfset arguments.event.makeEventBean(completePrintJobFormBean) />

    <!--- Trace the label data id field --->
    <cfset arguments.event.trace("labelDataID", completePrintJobFormBean.getLabelDataID()) />

    Creating event beans amounts to defining getters/setters and local variables for each one of your form fields. It can be a tedious process, but thankfully the Rooibos bean generator can do it for you by simply taking the names of your form fields and generating all the code. You can learn more about this in Dan Wilson's blog article "So you want to create a ModelGlue:Unity application? (Part 3)".

    To use your newly generated bean, you will have to add the bean definition to your ColdSpring.xml file:

    <bean id="completePrintJobFormBean"
    class="labelGeneration.model.completePrintJobFormBean"
    singleton="false" />

    Keep in mind that if your form changes, you have to regenerate the bean. If you do not care about reusing your beans through external consumers, such as Flex, I have written about an alternative approach in "Get All Form Fields in ModelGlue the Elegant Way".
  7. Use tracing to debug the value of your variables


    Once you have debugging enabled, you can add values to your debug output by using the event.trace() method:

    <cfset arguments.event.trace("myVariableValue", myVariable) />
  8. Split your configuration in multiple files


    You can tell ModelGlue to include different configuration files by using the "include template" directive. I use this approach to separate my events based on the action they are related to:

    <modelglue>
    <include template="./config/events/labelEvents.xml"/>
    ....
    </modelglue>

    Here is what an included file looks like:
    <modelglue>
    <event-handlers>

    <event-handler name="overRideLabelData">
    <views>
    <view name="body" template="frmOverRideLabelData.cfm" />
    </views>

    <results>
    <result do="strippedApplicationTemplate" />
    </results>
    </event-handler>

    </event-handlers>
    </modelglue>

Monday, March 17, 2008

Querying the File System - From ColdFusion to SQL Server

While looking for a solution to a different problem, I realized that SQL Server has a feature called extended stored procedures. Extended stored procedures, at least in SQL Server 2000, have to be written in C++ and compiled to a DLL. A good example of such a procedure is in my previous article on Using Regular Expression in SQL Server. I am not going to get into how to write extended stored procedures in this article since my C++ skills are rusty at best. Instead, I will talk about using an existing extended stored procedure to query the file system, and how to move from doing this in ColdFusion to doing it in SQL Server.

Using ColdFusion

Getting a list of files is pretty trivial in ColdFusion. All you have to do is use the "<cfdirectory>" tag like so:

<cfset localDirectoryPath = "C:\Temp" />

<cfdirectory
	name="textFiles"
	action="list"
	filter="*.txt"
	directory="#localDirectoryPath#" />


That will get a list of all files with the ".txt" extension in the "C:\Temp" directory.
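
The tag populates a query object (named "textFiles" above), so you can loop over the results like any other ColdFusion query; for example:

<cfoutput query="textFiles">
	#name# - #size# bytes - last modified #dateLastModified#<br />
</cfoutput>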

Getting a list of files from a network drive is not any different, except that you specify the network path as a UNC path (and need a little service configuration):

<cfset networkDirectoryPath = "\\larry\share" />

<cfdirectory
	name="textFiles"
	action="list"
	filter="*.txt"
	directory="#networkDirectoryPath#" />


As stated above, this will not work if you do not have the ColdFusion service configured to use a specific account that has access to your network path.

The typical ColdFusion service setup looks like the following screen shot when accessed through the "Services" configuration in Windows:

[Screenshot: ColdFusion service running without a specific user account]

To be able to query a network path, you need to specify a user that has access to that network path:

[Screenshot: ColdFusion service configured to log on as a specific user account]

Using SQL Server

The setup for using SQL Server for querying network paths is the same as the one for ColdFusion:

By default, the SQL Server service is not configured with a specific user account:

[Screenshot: SQL Server service with the default log on account]

It should instead be configured with a user that has access to the network path:

[Screenshot: SQL Server service configured to log on as a specific user account]

If you do not set up SQL Server to run as the appropriate user and try to query a network path with the method provided here, you will get an access denied error:

[Screenshot: SQL Server "Access is denied" error message]

With that aside, I have written a custom stored procedure based on the built-in extended stored procedure "xp_cmdshell". The procedure returns a query with the following columns:

  • fileID - unique auto incremented integer value
  • fileName - the name of the file
  • lastModifiedOn - the date the file was last modified
  • fileSize - the size of the file in bytes
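
Note that the procedure relies on "xp_cmdshell". On SQL Server 2000 it is available out of the box (to members of the sysadmin role), but on SQL Server 2005 and later it is disabled by default and has to be enabled first. A minimal sketch:

exec sp_configure 'show advanced options', 1
reconfigure
go

exec sp_configure 'xp_cmdshell', 1
reconfigure
go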

Here are the results of querying the local path "C:\Temp" without a file extension filter:

exec dbo.getDirectoryFileList 'c:\temp', null

[Screenshot: results of querying C:\Temp without a filter]

Once configured, the procedure can also query network drives; the call below returns only files with the ".btw" extension:

exec dbo.getDirectoryFileList '\\larry\indium\Production\labelGeneration', '*.btw'

[Screenshot: results of querying a network path filtered to .btw files]

Show Me the Code

/*
Type:		Stored Procedure
Name:		dbo.getDirectoryFileList
Author:		Boyan Kostadinov
Created:	03.17.2008
Dependencies:	master.dbo.xp_cmdshell
Usage:		exec dbo.getDirectoryFileList 'c:\temp', null
		exec dbo.getDirectoryFileList '\\larry\indium\Production\labelGeneration', '*.btw'
Parameters:	@directoryPath varchar(255)
		- The path of the local or network directory

		@fileExtensionFilter varchar(10) - Optional
		- The file extension to filter the file list by
Returns:	A list of files found on the file system
*/
create procedure dbo.getDirectoryFileList
	@directoryPath varchar(255),
	@fileExtensionFilter varchar(10) = null
as

set nocount on

-- Declare and initialize local variables
declare @dosCommand varchar(5000)
set @dosCommand = ''

-- If the file extension filter was empty, set it to all files
if @fileExtensionFilter is null or ltrim(rtrim(@fileExtensionFilter)) = ''
	set @fileExtensionFilter = '*.*'

-- If the directory path does not have an ending '\', append one
if substring(@directoryPath, len(@directoryPath), 1) <> '\'
	set @directoryPath = @directoryPath + '\'

-- Build the dos command to get a list of files
select @dosCommand =
	'insert into #tempFileList(fileListRow) ' +
	'exec master.dbo.xp_cmdshell ''dir ' + @directoryPath + @fileExtensionFilter + ''''

-- Create a temporary table to store the file list
create table #tempFileList (
	fileListRow varchar(1000) null
)

-- Create the #fileList temporary table to store the file list
create table #fileList (
	fileID int primary key identity(1,1) not null,
	[fileName] varchar(255) not null,
	lastModifiedOn datetime not null,
	fileSize bigint not null
)

exec(@dosCommand)

-- Delete unneeded rows from #tempFileList
delete from #tempFileList
where fileListRow is null

delete from #tempFileList
where fileListRow like '%Volume%'

delete from #tempFileList
where fileListRow like '%Directory%'

delete from #tempFileList
where fileListRow like '%<DIR>%'

delete from #tempFileList
where fileListRow like '%bytes%'

if not exists (select * from #tempFileList where fileListRow like '%access is denied%')
begin
	-- Populate the #fileList table with the final data
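	-- The dir output is fixed-width, so the date, time, size and file name
	-- are pulled out of each row by character position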
	insert into #fileList(lastModifiedOn, fileSize, [fileName])
	select	ltrim(substring(fileListRow, 1, 10))
		+
		' '
		+
		rtrim(ltrim(substring(fileListRow, 11, 15)))
		+
		'm'
		as 'lastModifiedOn',
		replace(ltrim(substring(fileListRow, 21, 18)), ',', '') as 'fileSize',
		ltrim(substring(fileListRow, 40, 1000)) as 'fileName'
	from	#tempFileList

	select * from #fileList
end
else
	select fileListRow as errorMessage from [#tempFileList] as e

-- Drop the temporary tables
drop table #tempFileList
drop table #fileList

set nocount off
go
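
Since this post started out in ColdFusion, it is worth noting that you can call the procedure from ColdFusion as well. Here is a minimal sketch using "<cfstoredproc>"; the datasource name and network path are hypothetical:

<!--- Call the procedure against the database where it was created --->
<cfstoredproc procedure="getDirectoryFileList" datasource="myDatasource">
	<cfprocparam type="in" cfsqltype="cf_sql_varchar" value="\\larry\share" />
	<cfprocparam type="in" cfsqltype="cf_sql_varchar" value="*.txt" />
	<cfprocresult name="fileList" />
</cfstoredproc>

<!--- Display the returned file list --->
<cfoutput query="fileList">
	#fileName# (#fileSize# bytes), last modified #lastModifiedOn#<br />
</cfoutput>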

Download

You can download the stored procedure from http://tech-cats.net/blog/downloads/sql/getDirectoryFileList.txt

References

Accessing the Windows File System from SQL Server

Tuesday, March 11, 2008

Object Oriented Programming with Prototype.js

Object oriented programming has been around for quite some time and was made popular by C++ back in the day. Nowadays, even web scripting languages support the paradigm. JavaScript does not have true class-based OO, but it lets you apply the same design patterns in your code. Today, we will dive into using object oriented programming with the popular JavaScript framework Prototype.

Prototype makes it easy to declare your own objects by using the "Class.create()" method:

var sampleObject = Class.create();


Once you do that you can start writing your class specific methods and properties inside the "prototype" object:

sampleObject.prototype = {
}


To declare properties on your object, you simply specify the name of the property followed by a colon and the value:

sampleObject.prototype = {
	linkIDs: ['mainPageLink', 'cdmScreenLink', 'adminLink', 'helpLink'],
	statusMessage: '',
	currentLinkID: 'myLink'
}


The important thing to remember is that each property needs to be separated from the next by a comma. As shown above, a property value can be an array, a number or a string, but it cannot be empty. So while:

currentLinkID: 'myLink',


is valid while

currentLinkID:,


is not.

Declaring your own functions is not any different. Before we look into functions however, there is one function that requires special attention: the "initialize" function:

initialize: function() {
}


The initialize function is called automatically when you create an instance of your object. If you are familiar with OO, this function is the constructor of your object. Any setup and initial requirements for using your object should be handled here.
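
Anything you pass when creating the instance is handed straight to "initialize". A throwaway example (not part of the class we are building here):

var greeter = Class.create();
greeter.prototype = {
	initialize: function(name) {
		// Arguments passed to "new" end up here
		this.name = name;
	}
};

var greeterInstance = new greeter('Prototype'); // initialize() receives 'Prototype'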

So to get back to regular functions, they are declared in the format functionName: function() {} as in:

doSomething: function() {
}

So far your object should look like this:

var sampleObject = Class.create();
sampleObject.prototype = {
	linkIDs: ['mainPageLink', 'cdmScreenLink', 'adminLink', 'helpLink'],
	statusMessage: '',
	currentLinkID: 'myLink',
	initialize: function() {
	},
	doSomething: function() {
	}
}


Big deal, right? Including that in your HTML page and/or a separate JavaScript file does not do anything for you yet. The next step in making it of any use is to actually create an instance of the object, like so:

var sampleObjectInstance = new sampleObject();


Creating an instance of the object automatically runs the code inside the "initialize" function. A good practice is to wrap the creation of your object inside the window's load or dom:loaded (Prototype v1.6) event, like:

Event.observe(window, 'load', function() {
	var sampleObjectInstance = new sampleObject();
});


Or

document.observe("dom:loaded", function() {
	var sampleObjectInstance = new sampleObject();
});


To expand on using properties inside your object, whenever you want to access a property such as "statusMessage", you have to prefix it with "this", as in:

this.statusMessage = 'Who Am I?';


That is because inside your object's functions, without "this", the code does not know about the property. The same applies to functions, so you cannot simply call the "doSomething" function with:

doSomething();


but instead you have to use:

this.doSomething();


This can get a little more complicated when it comes to using event listeners inside your code. Let me elaborate. To attach an event observer that calls the "doSomething" function when a user clicks the link with ID 'myLink', you would usually do:

$(this.currentLinkID).observe('click', this.doSomething);


While that will work, if you try to access any class properties (such as "statusMessage") inside the "doSomething" function, you will get "undefined" for their values. That is because, as pointed out above, the function is not aware that it belongs to an object, so it does not know about the object's properties. The remedy is simple: append .bind(this) to the function when it is tied to the event, as in:

$(this.currentLinkID).observe('click', this.doSomething.bind(this));


A similar approach needs to be applied when using the prototype built-in Ajax object. If you want to tie your custom functions to the "onFailure", "onComplete" or "onSuccess" functions, you need to use the "bindAsEventListener" function:

onSuccess: this.showContent.bindAsEventListener(this)


"Bind" also needs to be used whenever you employ the "each" construct as described in Gotcha with Prototype.Bind and Arrays

Below is the full class:

var sampleObject = Class.create();
sampleObject.prototype = {
	linkIDs: ['mainPageLink', 'cdmScreenLink', 'adminLink', 'helpLink'],
	statusMessage: '',
	currentLinkID: 'myLink',
	initialize: function() {
		this.statusMessage = 'Who Am I?';

		$(this.currentLinkID).observe('click', this.doSomething.bind(this));
	},
	doSomething: function() {
		alert(this.statusMessage);

		new Ajax.Request(
			$(this.currentLinkID).href,
			{
			method: 'get',
			onSuccess: this.processContent.bindAsEventListener(this),
			evalScripts: true
			}
		);
	},
	processContent: function(request) {
	}
}

Event.observe(window, 'load', function() {
	var sampleObjectInstance = new sampleObject();
});


To call the functions of the sampleObject from outside you would simply do:

sampleObjectInstance.doSomething();

While you can access its properties with the syntax:

alert(sampleObjectInstance.statusMessage);

That concludes the basic guide to using object oriented programming with Prototype. Did I miss anything to get you started?

Thursday, March 06, 2008

How to Extract Video Still Frames with MPlayer

While working on a video content management system, I needed to capture frames from video files so they could be used as video previews. The system already had some code in place, but it only worked for video encoded in Windows Media format (.wmv extension). That would not do, I thought, not in this day and age when we have so many file formats and video codecs. So I needed to be able to:

  1. Capture video frames from within .NET code
  2. Capture images from all kinds of different video formats
  3. Not reinvent the wheel while satisfying #1 and #2

Enter MPlayer

Stolen directly from the Information page of MPlayer, "MPlayer is a movie player which runs on many systems (see the documentation). It plays most MPEG/VOB, AVI, Ogg/OGM, VIVO, ASF/WMA/WMV, QT/MOV/MP4, RealMedia, Matroska, NUT, NuppelVideo, FLI, YUV4MPEG, FILM, RoQ, PVA files, supported by many native, XAnim, and Win32 DLL codecs. You can watch VideoCD, SVCD, DVD, 3ivx, DivX 3/4/5 and even WMV movies..". To add my own description, MPlayer is an open source command line video player.

So What

Metallica has an old song called "So What", great tune for the metal heads in all of us. The answer is simple, MPlayer can do a lot more than play video. It can capture images, stream video over http and even transcode video from one format to another (but the last feature is grounds for another article).

Putting It Together

You will need:

  • The MPlayer executable for Windows (mplayer.exe)
  • The MPlayer binary codecs package
  • The mPlayerWrapper DLL or source (see the Downloads section at the end of this post)

Setup

  1. Extract the mPlayer executable (mplayer.exe)
  2. Extract the mPlayer codecs in a directory called "codecs" in the same directory as the mPlayer executable
  3. Add a reference to the mPlayerWrapper project or compiled DLL

Usage

  • Capture with default arguments

    in C#
    // Specify the path to the video file
    string videoFilePath = @"drive letter:\path\to\myVideo.mpg";
    // Declare the mplayer instance with the mplayer executable residing in the
    // same directory as your executable
    mPlayerWrapper mPlayerInstance = new mPlayerWrapper();
    // Capture frames
    mPlayerInstance.captureFrames(videoFilePath);
    

    in VB.NET
    ' Specify the path to the video file
    dim videoFilePath As String = "drive letter:\path\to\myVideo.mpg"
    ' Declare the mplayer instance with the mplayer executable residing in the
    ' same directory as your executable
    dim mPlayerInstance As new mPlayerWrapper()
    ' Capture frames
    mPlayerInstance.captureFrames(videoFilePath)
    

    This will capture 12 frames with a 5-second interval between each frame and put them in the same directory as your video file. The filename of each frame will be "myVideo_thumb01.jpg" through "myVideo_thumb12.jpg". Each frame will be scaled to 270x200.
  • Capture arguments (a configuration sketch follows after this list)
    • mPlayerPath - sets the path where the mPlayer executable (mplayer.exe) is located. The default is in the same directory as your code is executing.
    • currentFilePath - sets the file path to the video file you are using
    • cleanOutputDirectory - deletes all the "jpg" images in the capture output directory before capturing
    • captureInterval - sets the interval at which frames will be captured. Only applicable if using a time interval capture method (as outlined below)
    • numberOfFramesToCapture - the number of frames to be captured
    • captureExactNumberOfFrames - tells the wrapper to attempt to capture the exact number of frames as specified by the "numberOfFramesToCapture" property. This can be used if a file has too few frames but you still want to capture an exact number
    • useTimeSeekToCapture - used to set the wrapper method of capture to seeking through the file instead of capturing a frame at an interval
    • thumbnailPrefix - the prefix to be used when creating the filenames for captured frames. The default is the name of the video with "_thumb" appended to it, as in "myVideo_thumb01.jpg"
    • capturedFrameWidthHeight - the width:height that each frame will be scaled to. The default is 270:200
    • scaleCapturedFrames - used in conjunction with the "capturedFrameWidthHeight" property to scale down the captured frames. Set to true by default
  • Capture in a different output directory
    in C#
    // Specify the path to the video file
    string videoFilePath = @"drive letter:\path\to\myVideo.mpg";
    // Specify the output directory
    string outputPath = @"drive letter:\path\to\output directory";
    // Declare the mplayer instance with the mplayer executable residing in the
    // same directory as your executable
    mPlayerWrapper mPlayerInstance = new mPlayerWrapper();
    // Capture frames
    mPlayerInstance.captureFrames(videoFilePath, outputPath);
    

    in VB.NET
    ' Specify the path to the video file
    dim videoFilePath As String = "drive letter:\path\to\myVideo.mpg"
    ' Specify the output directory
    dim outputPath As String = "drive letter:\path\to\output directory"
    ' Declare the mplayer instance with the mplayer executable residing in the
    ' same directory as your executable
    dim mPlayerInstance As new mPlayerWrapper()
    ' Capture frames
    mPlayerInstance.captureFrames(videoFilePath, outputPath)
    
  • Using distinct capture methods:
    1. captureFramesWithInterval - by default captures a frame every 5 seconds with up to 12 frames
    2. captureFramesWithTimeSeek - by default captures 1 frame each second with up to 12 frames by seeking through the file
  • Bonus
    • getFileProperties - returns a SortedList of video and audio properties for the file
    • getAudioProperties - returns a SortedList of audio properties for the file
    • getVideoProperties - returns a SortedList of video properties for the file
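
As promised in the "Capture arguments" item above, here is a small configuration sketch. The property names are taken from that list, but the exact members and their types are assumptions on my part, so verify them against the mPlayerWrapper source.

// Configuration sketch: property names come from the "Capture arguments" list
// above; treat the exact members as illustrative rather than authoritative
string videoFilePath = @"drive letter:\path\to\myVideo.mpg";

mPlayerWrapper mPlayerInstance = new mPlayerWrapper();
mPlayerInstance.cleanOutputDirectory = true;          // delete old .jpg files from the output directory
mPlayerInstance.numberOfFramesToCapture = 6;          // capture 6 frames instead of the default 12
mPlayerInstance.captureInterval = 10;                 // one frame every 10 seconds
mPlayerInstance.thumbnailPrefix = "preview_";         // custom filename prefix for the captured frames
mPlayerInstance.capturedFrameWidthHeight = "320:240"; // scale frames to 320x240

// Capture frames with the settings above (captureFramesWithInterval or
// captureFramesWithTimeSeek could be called here instead)
mPlayerInstance.captureFrames(videoFilePath);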

Downloads

mPlayerWrapper: http://blog.tech-cats.net/examples/dotnet/mPlayerWrapper.dll
mPlayerWrapper Source: http://blog.tech-cats.net/examples/dotnet/mplayerWrapper-v0.2.zip
