Detecting T-SQL code smells – Part 2

For the second part of this series, I’m relying on a project hosted on CodePlex called TSQL Code Smells Finder.  The solution consists of three files (a PowerShell script, a text file containing a list of possible errors, and a whitelist file) and is built entirely in PowerShell, so it isn’t that complicated to extend the solution later.

Detecting T-SQL code smells: getting started

Before covering the internals, which we’ll need to understand in order to add new validation rules, it’s recommended to just give the tool a try on the provided test files.

Note: The TSQL Code Smells Finder project can be downloaded from the following location:

Running the project without providing any parameters

Once you have extracted the archive file, you will find the following items:

  • errors.txt – which contains metadata about the possible validation rules
  • WhiteListFiles.txt – can be used to exclude files
  • TSQLSmells.ps1 – contains the script
  • TSQLSmells – a folder containing some test cases

To initiate a test run, just call the script from wherever you’ve stored the files.
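A run might look like the following (an illustrative sketch; the extraction path is an assumption, and the -FolderPath parameter is the one mentioned further below):

```powershell
# Illustrative invocation; adjust the paths to wherever you extracted the archive.
cd C:\Tools\TSQLSmellsFinder
.\TSQLSmells.ps1 -FolderPath .\TSQLSmells
```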


At this point you should see some processing feedback in the prompt, and a bit later the review GUI should pop up. The processed file selection menu is located on the left, and the list of detected issues can be found above the feedback area.

The feedback area will highlight the relevant code block, which is great for small to medium-sized databases. For larger databases you might want to split the input into logical chunks or build a report based on the smells.xml output file, which contains the same information in XML format.

The TSQL Code Smells Finder review GUI.

Note: The file location can be provided via the $FolderPath argument; more options and details can be found on the project website.

TSQL Code Smells Finder internals

I won’t be covering all the details, given that the source is available to everyone. However, I do want to cover some basics on how things are set up, which will allow us to extend the script later.

The routine Process-File is one of the main routines and the place where the file is read and parsed. The project utilizes the TSQL110parser class, which eliminates the need to implement custom regular-expression parsing logic (and the drama that comes with it). The parsed results of TSQL110parser already contain metadata, which simplifies detecting how many and what kind of SQL statements need to be processed.

Sample of how to retrieve optimizer hints. $Stmt is a single statement after parsing the results from TSQL110parser.Parse(..). Note, however, that the parser won’t return single statements but batches instead; a batch might be a single statement or a collection of related statements (see Extract-Statements for details).
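As a rough sketch of what such a retrieval can look like (assuming the Microsoft.SqlServer.TransactSql.ScriptDom assembly is loadable; the assembly path and the test query are assumptions, not taken from the project):

```powershell
# Load the parser assembly (the path is an assumption; it ships with SQL Server tooling)
Add-Type -Path "Microsoft.SqlServer.TransactSql.ScriptDom.dll"

$parser = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql110Parser($true)
$errors = $null
$reader = New-Object System.IO.StringReader("SELECT * FROM dbo.SomeTable OPTION (RECOMPILE);")
$script = $parser.Parse($reader, [ref]$errors)

# The parser returns batches, not single statements
foreach ($batch in $script.Batches) {
    foreach ($Stmt in $batch.Statements) {
        # Statement types that support hints expose an OptimizerHints collection
        if ($Stmt.OptimizerHints) {
            foreach ($hint in $Stmt.OptimizerHints) {
                Write-Output $hint.HintKind
            }
        }
    }
}
```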

More information about the TSQL110parser class can be found here.

Process-Statement will pass the data to a more specific function, which is where the actual validation and feedback processing take place. You can spot these functions easily, given that their names start with Process-*.
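The dispatch itself is essentially a type check on the parsed statement. A simplified, hypothetical sketch (the handler names and the statement types actually handled by the real script may differ):

```powershell
# Hypothetical sketch of the dispatch pattern used by Process-Statement
function Process-Statement ($Stmt) {
    switch ($Stmt.GetType().Name) {
        "SelectStatement"          { Process-SelectStatement $Stmt }
        "ExecuteStatement"         { Process-ExecuteStatement $Stmt }
        "CreateProcedureStatement" { Process-CreateProcedureStatement $Stmt }
        default                    { }  # statement types without a matching rule are skipped
    }
}
```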

And finally there is Send-FeedBack, which is responsible for writing the validation feedback to the smells.xml output file, passing the error number (corresponding to the number found in errors.txt) and the statement.
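In spirit, that amounts to appending one record per finding. A minimal sketch, not the script’s actual implementation (the element and attribute names, and the $script:SmellsXml document variable, are all assumptions):

```powershell
# Minimal sketch of feedback collection; assumes $script:SmellsXml is a System.Xml.XmlDocument
function Send-FeedBack ($ErrorNumber, $Statement) {
    $fault = $script:SmellsXml.CreateElement("Fault")
    $fault.SetAttribute("Error", $ErrorNumber)   # matches an entry in errors.txt
    $fault.InnerText = $Statement
    [void]$script:SmellsXml.DocumentElement.AppendChild($fault)
}
```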

In part 3, the last part of this series, we will add some additional rules and see how we can extend the script.
