THE SQL Server Blog Spot on the Web


Ben Miller

  • Excited to be speaking at #SQLSATATL next weekend

    I will be speaking next weekend at SQL Saturday #220, #SQLSATATL.

    My sessions will be on SQL Server TDE and SMO Internals for High Performance PowerShell. Both are great fun to present and good information to have.

    Hope to see you there in Alpharetta, GA.


  • VCPASS: Extend your T-SQL Scripting with PowerShell

    Date: November 16, 2011
    Extend your T-SQL Scripting with PowerShell

    Description: I'll be covering some of the different ways we can use PowerShell to extend our T-SQL scripting. This session will include a mix of SMO, .NET classes, and SQLPS to help you understand the power of this new scripting technology. At the end we'll build a solution that puts all these techniques together.

    Date/Time: 11/16/2011 1:00 PM - 2:00 PM EST
    Registration Link:

    Maximo Trinidad: I'm from Puerto Rico and have been working with computers since 1979. Over the years I have worked with various types of databases, such as dBase, OS/400 DB2, MySQL, and MSSQL. I have also provided support for Windows Server 2003/2008, Windows 7, SQL Server applications, and Microsoft virtualization technologies, and have built some Visual Studio solutions. I have been a Microsoft PowerShell MVP since 2008 and a SAPIEN Technologies MVP since 2010.

  • Networking: What exactly is it, and how do you use it (appropriately)?

    I have had the opportunity lately to tap into "Marketing" something to get some traction on a Vote for Me scenario.  It made me wonder about Networking: what it really is and what I would use it for.  There are many spammers and others out there who will send anything and everything to the masses in the name of Marketing.  I thought about this concept because of a contest for a Cute Girls Hairstyle blog that I am trying to drive people to.  The blog belongs to a neighbor of mine, and I think it is a nice gesture to support their cute family in this contest.

    Now when people think of Marketing, that word, in and of itself can make you think of many things about what it is.  That is why I wanted to write this post about Networking.

    When someone wants to “tap his network”, what kind of things are fair game?  I am part of many networks, LinkedIn, FaceBook, Microsoft MVP, Twitter, Blogging and some others.  Now when you think of the purpose of those networks, are there topics in them that are taboo?  If you opened the floodgate on them, would you have problems with complaints?  What exactly is “tapping the network” in these cases?  Many could successfully argue about LinkedIn being more professional contacts, Facebook being highly social and only that, MVP being Technical and Twitter being a hybrid of all those.  So how do you use them to get traction on something of a cause or contest or whatever, that you feel strongly about helping out with?  Would you cry foul if this came over a professional networking site?  Maybe, maybe not.

    Here are my views, and I encourage you to weigh in on this topic, because it would be interesting to compile thoughts and post them about how this affects the online universe we all live in every day.

    I think that networks are a great tool for finding jobs, friends, viewpoints and information about topics.  Each one seems to have a purpose, and I think it would be out of line to violate the purposes of those networks.  A list of purposes is below, along with what I think each is used for.

    • Professional networks are mainly used to connect people in like professions, for a few reasons.  The first reason I think of is needing help with a problem where you just need another set of eyes.  This can get you in a little trouble in the IT world, as it could be construed as "free consulting", but it can be valuable to both parties, so you could choose to overlook it, or to scold, on this one.
    • The next reason would be job searches and contact about jobs.  This is a tricky one because of the recruiting that goes on.  Recruiters get into your network, and yet they are not technical, at least in most cases.  So do they belong?  A potential job may be one reason to overlook their involvement in your network at that level, but I am not sure that it is the right networking opportunity, and perhaps it should be shied away from.
    • People connections.  This can be done on many networking sites and is probably appropriate on the majority of networks.  But at the same time it is probably the trickiest, because of the boundaries of the above networks, when your topic is not in harmony with the purpose of the network.  You can be friends with many people in the professional networks, but how would your message about the contest I pointed to above be received?  "This is tricky" is the best answer I have for this one.
    • Association networking.  You belong to a group or an association and you want to use this network for the above purpose.  This is probably out of line here as well, due to the charter of the organization or association.

    With these thoughts in mind, I just find it frustrating that there is not a better way than Facebook to communicate something that you want to get people involved in.  There is one network that I deliberately left off the list, because it deals with a valuable sanctuary that most if not all of us have: the "Email Box".  Because of all the spam out there, the email box tends to be protected and is not seen, at first, as a networking method.  I have been having a dilemma about how to get the word out to people that I know would vote or participate, while being respectful of all the demands on our time in this ever-changing world.  As things get more fast-paced and time constraints get crazier, it becomes easy to have 5+ networks and not be sure how to use each one of them appropriately.

    For this contest, I have turned to Facebook, blogging, LinkedIn (as a status update instead of a mass message), Twitter (as a plea for assistance), and I will turn to my family on email, along with certain others that I know would not be offended by this venture.  What do you all think?  Where would this contest be appropriately submitted for help to get votes?  What are networks used for, in your opinion?  And, a topic for another day: how do you build networks to a level of effectiveness that they can assist you when you are in need?  I am sure that I have not done this topic justice, but I wanted to get it out there so that I can continue to think about it and get some weigh-in on your thoughts about "Networking", as it has been at the forefront of my mind lately, with requests to "tap into my network" for assistance with a myriad of things.

    Thanks for listening.

    Cross-posted from

  • Powershell, SMO and Database Files

    In response to some questions about renaming a physical file for a database, I have 2 versions of PowerShell scripts that do this for you, including taking the database offline and then online so that the physical change matches the metadata.

    First, there is an article about this that explains you start by setting the database offline, then alter the database to modify the filename, then set it back online.  That article does not show the physical renaming of the file, but it is necessary, since the ALTER DATABASE ... MODIFY FILE statement only changes the metadata of the database in master.

    There is a procedure to do this with SQL CLR and TSQL, but I chose to illustrate it in Powershell and SMO (using SQL Provider as well as straight SMO).  The SQLCLR version is by a SQL Server MVP, Ted Krueger (Blog | Twitter) and can be found at this link.

    My version will be based on Powershell and SMO with a smattering of SQL Provider provided in SQL 2008+.

    Code Snippet

    Add-PSSnapin SqlServerCmdletSnapin100
    Add-PSSnapin SqlServerProviderSnapin100
    $servername = "localhost"
    $instance = "default"
    $dbname = "N2CMS"
    $logicalName = "N2CMS"
    $NewPath = "c:\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\data"
    $NewFilename = "N2CMS_4.mdf"
    if(Test-Path "sqlserver:\sql\$servername\$instance\databases\$dbname")
    {
        $database = Get-Item("sqlserver:\sql\$servername\$instance\databases\$dbname");
        $fileToRename = $database.FileGroups["PRIMARY"].Files[$logicalName]
        $InitialFilePath = $fileToRename.FileName
        $fileToRename.FileName = "$NewPath\$NewFilename"
        $database.Alter();
        $database.SetOffline()
        Rename-Item -Path $InitialFilePath -NewName "$NewFilename"
        $database.SetOnline()
        Write-Host "File Renamed"
    }
    else
    {
        Write-Host "Database does not exist";
    }

    If you notice, I first add the snap-ins for the SQL Provider.  If you already have them loaded in your PowerShell profile, then you can just omit those lines.  If you do not have the SQL 2008 objects installed on this machine, then you will notice that the snap-ins don't load either.  In that case, you would just use the version below.

    In the SQL Provider version you will see a PowerShell-ism with Test-Path: you are still using a path, it is just a SQL Provider path that points to the database and makes sure that it exists.  Compare it to the other version, where you look at Server.Databases[$dbname] and then get the filegroups and then the files.

    This version of the script is purely SMO and PowerShell.  First you load the objects from SQL Server 2005/2008 SMO.

    Code Snippet

    # Always
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
    # Only if you don't have SQL 2008 SMO objects installed
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | out-null
    # Only if you have SQL 2008 SMO objects installed
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlEnum") | out-null
    $servername = "localhost"
    $instance = "default"
    $dbname = "N2CMS"
    $logicalName = "N2CMS"
    $NewPath = "c:\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\data"
    $NewFilename = "N2CMS_4.mdf"
    $server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $servername
    if($server.Databases[$dbname] -ne $null)   # PowerShell comparison is -ne, not !=
    {
        $database = $server.Databases[$dbname];
        $fileToRename = $database.FileGroups["PRIMARY"].Files[$logicalName]
        $InitialFilePath = $fileToRename.FileName
        $fileToRename.FileName = "$NewPath\$NewFilename"
        $database.Alter();
        $database.SetOffline()
        Rename-Item -Path $InitialFilePath -NewName "$NewFilename"
        $database.SetOnline()
        Write-Host "File Renamed"
    }
    else
    {
        Write-Host "Database does not exist";
    }

    This version uses the Server object to get the database you are looking for so that you can get at the files.  The file you are looking for is indexed by $logicalName, which lets you get at the physical FileName of the file.  Then, using the built-in cmdlets for file operations, a Rename-Item is issued to rename the initial path to the new one.

    Operation order:

    • $database.Alter() to change the file metadata in master
    • $database.SetOffline() to release the physical file
    • Rename-Item from old to new
    • $database.SetOnline()

    I did not put these in a function, but they could easily be put together into one using parameters to make them versatile for other situations; that is for you to do so that it works for your scenario.
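    Wrapped as a function, the SMO version might look something like this.  This is only a rough sketch; the function name and parameter layout are my own illustration, not part of the original scripts:

```powershell
# Hypothetical wrapper around the SMO version above; names are illustrative.
function Rename-DatabaseFile
{
    param(
        [string]$ServerName,
        [string]$DatabaseName,
        [string]$LogicalName,
        [string]$NewPath,
        [string]$NewFilename
    )

    $server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $ServerName
    $database = $server.Databases[$DatabaseName]
    if($database -eq $null)
    {
        Write-Host "Database does not exist"
        return
    }

    $fileToRename = $database.FileGroups["PRIMARY"].Files[$LogicalName]
    $initialFilePath = $fileToRename.FileName
    $fileToRename.FileName = "$NewPath\$NewFilename"
    $database.Alter()        # update the metadata in master
    $database.SetOffline()   # release the physical file
    Rename-Item -Path $initialFilePath -NewName $NewFilename
    $database.SetOnline()
    Write-Host "File Renamed"
}

# Example call:
# Rename-DatabaseFile -ServerName "localhost" -DatabaseName "N2CMS" -LogicalName "N2CMS" `
#     -NewPath "c:\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\data" `
#     -NewFilename "N2CMS_4.mdf"
```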

    Love PowerShell!

  • An honor to be among the SQL Server MVPs

    I just found out last night that I was officially awarded SQL MVP.  I am honored to be among those that I respect and admire. 

    I don't contribute for the recognition, and as a former MVP Lead I know the caliber of those that contribute to the SQL Server community, including those that are not MVPs.  I am grateful to those of you who have helped me get where I am in SQL Server land.

    I just wanted to say thanks to the SQL Community for the support and also to Microsoft for this award.

    Here’s to a great 2011 and hope to see you MVPs at the Summit in February.

  • PASS Virtual Chapter: Powershell today - Aaron Nelson

    Just a reminder about the Virtual Chapter today: at 12:00 noon Eastern Time we will have a meeting with Aaron Nelson presenting a grab bag of PowerShell material for SQL Server.  The link below is the attendee link.  This is our regularly scheduled program each month, and the website is

    Hope you can make it.  There was standing room only at Aaron's PASS presentation in Seattle, so you won't want to miss this one.


  • Long time SQL Cluster issue resurfaced

    It is interesting, as time goes on, what things others find that have already been reported but seem to have been ignored.  The topic of this post is a SQL 2005 clustering issue with lowercase server names.

    I attended a clustering class by Russ Kaufmann, and during the class I showed him a problem we had installing a Windows 2003 cluster with SQL 2005.  The issue was that if your physical server name was in lowercase, the SQL 2005 install would fail and roll back (partially, at that time) the entire install.  I went round and round with a Connect item and could not get Microsoft to repro this, even though I had reproduced it on demand many, many times.

    Now I find this post from Tara Kizer.  It illustrates that when physical server names are in lowercase, there is an issue with the install.  I also found a reference to a post by Microsoft PSS illustrating the problem as well.

    The only differences are that my issue was reported in 2007 and these posts are from 2009, and my operating system was Windows 2003 R2 while these reference Windows Server 2008.  It is still amazing that this is not fixed in Windows 2008, but I just had to get this off my chest.


  • Windows Vista client connecting to SQL 2005 Cluster on Windows 2008

    There have been a couple of issues that keep pestering the SQL Server community for those that use Vista as their client. 

    Example: SQL Protocols Blog entry

    Now Microsoft has created a KB on it:

    I have tried and tested a solution that fixes this, and it is related to the KB, which for some reason was not available when I originally had the problem.  The steps that I took to solve it are as follows:

    • Open Windows Firewall in Control Panel
    • Go to the Exceptions Tab
    • Click on Add Program
    • Find SQL Server Management Studio and click OK
    • Test the connection
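    If you prefer the command line, roughly the same exception can be added with netsh.  This is a sketch; the path to Management Studio below is an assumption, so adjust it for your install:

```
rem Add SSMS to the Windows Firewall exceptions list (path is illustrative).
netsh firewall add allowedprogram ^
    program="C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWb.exe" ^
    name="SQL Server Management Studio" mode=ENABLE
```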

    I did this several times, enabling and disabling the firewall, with consistent results: I could now connect to the Windows 2008 cluster that had SQL 2005 failover instances on it.  I then went to a colleague's workstation, and it fixed it as well.

  • Centralized ConnectionString manager

    Has anyone any opinions or real world experience in creating a centralized ConnectionString manager and what would be the mechanism to retrieve them?

    We are going down this road so that connection strings live in one centralized place and can be managed there.  But there is a caveat: we do not want this mechanism to lessen security, so I would not imagine that storage in a database would necessarily be the place.  But I am not opposed to it.

    Anyway, any ideas?

  • SQL 2005 Encryption with Certificates

    OK, I found out that my age is creeping up on me.  I have implemented some encryption using Certificates in SQL Server and had some interesting experiences I thought I would share.

    I have a Database Master key and planned out my encryption strategy, at least for the near future, as follows:

             DATABASE MASTER KEY
             |________CERTIFICATE protected by the Database Master Key above.
                      |________SYMMETRIC KEY encrypted by Certificate above.

    So I created my DB Master Key with a password, then I created my certificate, and then created my symmetric key with decryption by certificate.  What I found was that we had already implemented encryption and decryption using an external, function-based encrypter/decrypter.  So you would use something like:

    SELECT dbo.fn_EncryptMyString('41111111111111111')

    I would get the character value back with it encrypted.  Then I would be able to decrypt with

    SELECT dbo.fn_DecryptMyString(fieldInDatabase)

    When moving to SQL Server encryption, SYMMETRIC KEY encryption/decryption inside a function is not supported when the key is protected by the DATABASE MASTER KEY, because of the nature of a function in a query.  You cannot open a symmetric key inside a function; you must use certificates.

    This is what led me to this experience.  First you Create or Open your DATABASE MASTER KEY and then you would create the Certificate with all the information that is necessary to create it:

    CREATE CERTIFICATE NameOfCertificate
    ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'
    WITH SUBJECT = 'This is My Subject',
    EXPIRY_DATE = '10/31/2009';

    When you are done with that, you will want to back up the certificate and its private key in order to recover if you lose the DB Master Key (in the backup file, the private key is encrypted with the password you specify).  Having the private key of the certificate will allow you to still decrypt the data after you re-establish the DB Master Key in the database.

    BACKUP CERTIFICATE NameOfCertificate TO FILE = 'c:\storedcerts\NameOfCertificatecert'
    WITH PRIVATE KEY ( FILE = 'c:\storedkeys\NameOfCertificatekey' ,
    ENCRYPTION BY PASSWORD = '997jkhUbhk$w4ez0876hKHJH5gh' );

    This will back up the certificate and its private key to the files named above, and you can then restore it to the database after moving the database to another server, or whatever the case may be.
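    With the certificate in place, certificate-based versions of the earlier helper functions might look roughly like this.  This is a sketch; the function bodies are my own illustration (reusing the certificate name and password from above), not the code from the original system:

```sql
-- Illustrative only. Encrypting uses the certificate's public key, so no key
-- has to be opened, which is why this works inside a function.
CREATE FUNCTION dbo.fn_EncryptMyString (@plain varchar(100))
RETURNS varbinary(256)
AS
BEGIN
    RETURN EncryptByCert(Cert_ID('NameOfCertificate'), @plain);
END
GO

-- Decryption needs the private key; since this certificate's private key is
-- password-protected, the password is passed to DecryptByCert (hardcoding it
-- here is for illustration only).
CREATE FUNCTION dbo.fn_DecryptMyString (@cipher varbinary(256))
RETURNS varchar(100)
AS
BEGIN
    RETURN CONVERT(varchar(100),
        DecryptByCert(Cert_ID('NameOfCertificate'), @cipher, N'pGFD4bb925DGvbd2439587y'));
END
GO
```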

    This is all well and good, but when testing it as sa or a sysadmin you will always be able to use the certificate.  As a normal user that does not have dbo privileges, or other privileges that give you rights to the certificate, there are 2 permissions that you must be granted in order to use the certificate.

    I found this nice explanation on MSSQL Tips that is very easy to understand and is the same information as in BOL.
    It is found here.

    The permissions are:

    If you do not grant these permissions, you will just get a NULL value back from either DecryptByCert or EncryptByCert, no error messages or anything, just a NULL.

    So have fun.  I will certainly remember next time I need to set this up, as once you have felt the pain of forgetting the last step, you tend to stay a little more aware of it on future occurrences.

  • What is a SARG exactly? Training classes and knowledge retention!

    I recently (2 weeks ago) took a class from my friend Kalen Delaney on SQL Server internals.  It was 5 days, and she really packed the material in.  I really like in-depth training because it makes me think harder, and for some reason I learn faster that way.  I have always found that conferences and training classes are great for the week that you are there, but then you get back home and go back into the office, and after the first week back it is easy to lose what was learned.

    I have also found that if I will come home and teach or apply what I have learned in some way, that it will be retained for a greater amount of time, and maybe even never lost.

    So, after taking Kalen's class on internals, I remember that in the last part of the week she taught us about SARGs (search arguments) and how they can affect the usage of indexes.  Recently at work, a query came across my desk, and in essence here is what it looked like:

    SELECT DISTINCT o.OrderID, o.PostDate, o.ShipMethod, o.Status, ISNULL(o.CenterID,0) AS CenterID
    FROM dbo.Orders o
    WHERE WarehouseID = 99
    And [Status] = 'Shipped'
          And ISNULL(Iscredit,0) = 0
          And CAST(CONVERT(char(8), ISNULL(PostDate,orderdate),112) AS datetime) BETWEEN '2008-08-24' AND '2008-08-25'
    ORDER BY o.OrderID

    There were indexes on the date fields, and the table has about 3 million rows in it.  When this query runs, it returns 4 rows of the 3 million, with a query plan of just 3 operators: a Table Scan (84%), a Compute Scalar (4%) and a Sort (11%).  But the query returns in 30 seconds, and that just won't do.

    [I will insert the query plan image in later]

    So I applied the concept of the SARG that I learned from Kalen to encourage the optimizer to get a better plan and use the indexes.  Here is the modified query.

    SELECT DISTINCT o.OrderID, o.PostDate, o.ShipMethod, o.Status, IsNull(o.CenterID,0) AS CenterID
    FROM dbo.Orders o
    WHERE WarehouseID = 99
        And [Status] = 'Shipped'
    And IsNull(Iscredit,0) = 0
        And (PostDate Between '2008-08-24' and '2008-08-26' OR (PostDate IS NULL AND OrderDate BETWEEN '2008-08-24' AND '2008-08-26'))
    ORDER BY o.OrderID

    Notice that in the BETWEEN and the OR clause, PostDate and OrderDate are referenced directly, without the CONVERT or CAST.  The date comparisons in the first query were wrapped in functions, giving the optimizer little to search on; the second query clearly defines what it is looking for, either PostDate between the dates, or PostDate NULL and OrderDate between the dates, so the index could be used.

    This query plan [I will insert the graphic of it later when I upload it] was one that used indexes for seeks, and may even be more tunable on the index side, but the query returns in < 1 second and has far fewer reads than the previous one.

    The STATISTICS IO on the first query looked like this:

    Table 'Orders'. Scan count 1, logical reads 551642, physical reads 786, read-ahead reads 166130.

    The STATISTICS IO on the second query looked like this:

    Table 'Orders'. Scan count 2, logical reads 16, physical reads 6, read-ahead reads 0.

    For me, these are fantastic results, and I have to say it was simpler to spot after learning about SARGs and how the optimizer works with the query that comes in.



  • Finding out who stopped SQL Server

    This is the latest quest of the day.  I am appealing to all those that have a good handle on how Windows events happen and whether they are kept or just logged.  If you get an event in the Event Viewer indicating that the "Service Control Manager" stopped SQL Server, there is no indication of the user that did it.

    Is there a way to capture that after the fact? Or in other words, does Windows store that anywhere that you can get to after it has come back up?  Windows has not been restarted, but SQL Server has.

    Any ideas of how to find out who stopped and started SQL Server would be great.
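    As a starting point for digging, here is a sketch in PowerShell that pulls the recent Service Control Manager entries mentioning SQL Server from the System log.  The SCM events themselves do not carry the initiating user, but the timestamps at least narrow the window to correlate with Security-log logons, if auditing is enabled:

```powershell
# Sketch: recent Service Control Manager events that mention SQL Server.
Get-EventLog -LogName System -Source "Service Control Manager" -Newest 200 |
    Where-Object { $_.Message -match "SQL Server" } |
    Select-Object TimeGenerated, EntryType, Message |
    Format-Table -AutoSize -Wrap
```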

  • The quest for database documentation

    Recently I have had a great need to see inside about 39 servers (Production, Dev, Test, etc.) and find out things like:

    * How many logins have Sysadmin privileges
    * How many users have the db_owner role
    * How many databases total do we have in the Enterprise
    * How much space is being used in total by Databases and their Log files

    With this in mind, I came across the February article in SQL Server Magazine about the DBA Repository, but had not had enough time to really dive into it.  Now I have had a chance, and a great need, so amidst my PowerShell and SMO work to get the information I want, I have also leaned on this SSIS package to gather the data.

    For those who have not seen this or loaded it: it comes with an SSIS package to load the information and an SSRS reporting project that displays some of the data in the tables.  The reporting is interesting but not yet complete, so I will be venturing there next to build some good reports that can show me the exceptions that are out there, as well as some of the security concerns.  I will also be tweaking it to keep a history of the drive space and the size of the databases to see when they grow, etc.  Most of this is for trending the different statistics kept inside the tables of the repository database.

    I have made some changes for those that use this repository. 

    These include:

    * Added a GroupID field from sysfiles into the Database_Info table so that I can separate data and log groups and know how much space is data file and how much is log file
    * Changed the query for SQL 2000 jobs so that I actually get the jobs from SQL 2000; for some reason it failed to execute, so I changed the query and now it does not fail
    * Changed the validation to delayed so that the generic connections for 2005 and 2000 would not try to connect to the servers when the connection was incorrect (it was missing the server by design)
    * Added a few stored procedures and reports to get some data back the way I wanted

    I will be updating this package and the reporting package as I go along, and, with permission from the original author, will be releasing it back to him and to the wild, so stay tuned.  As for the information gathered: for 39 servers, my workstation ran it in 8 minutes for all the data.

    But the other reason for writing this blog entry is that when I put the package into the SSIS store, I ran into something I did not know about.  When you have a named instance of SQL instead of a default instance on your machine, and there is no default instance, expanding MSDB inside a connection to SSIS causes an error, because it is looking for the default instance.  Most DBAs know that when you connect to SSIS you connect to the machine name, not to an instance, and that there is only ever one instance of SSIS on a machine, so this can be a problem when you want to store a package inside MSDB.

    In the SQL documentation (BOL), there is a little note hidden away on how to get this to work.  In the directory (the default location if unchanged at install) C:\Program Files\Microsoft SQL Server\90\DTS\Binn there is a file called MsDtsSrvr.ini.xml.  In this file there is a property that you must change to get it to connect to your instance's MSDB.  It is <ServerName>.</ServerName>.  By default it points to a . (period), which is the local default instance.  Because I did not have one, I had to change it to point to my instance, .\DESK.  I saved the file, restarted SSIS, and I could then expand MSDB and store the package in the SSIS Package Store.
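    For reference, the relevant fragment of MsDtsSrvr.ini.xml after the change looks roughly like this (trimmed to the MSDB folder entry; the instance name .\DESK is specific to my machine):

```xml
<TopLevelFolders>
  <Folder xsi:type="SqlServerFolder">
    <Name>MSDB</Name>
    <ServerName>.\DESK</ServerName>
  </Folder>
</TopLevelFolders>
```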

    The other weirdness I came across was in scheduling it in a job.  The only option by default is to run the step as the SQL Server Agent account, which is not a very privileged account on my local machine, so it would not run anything against other servers, just mine.  So I read up on credentials and proxies in SQL 2005 and found that I could create a credential with a user that had (Windows) permissions on the other servers.  After creating the credential, I created a proxy in SQL Server Agent and told it to use the credential.  Then I edited the step, and in the Run As... dropdown there was my proxy; sure enough, my job with the SSIS package ran successfully, and I am a happy camper.
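    Sketched in T-SQL (the names and secret below are placeholders, not what I actually used), the credential-and-proxy setup is along these lines:

```sql
-- Placeholder names and password; use a Windows account that has
-- the needed permissions on the remote servers.
CREATE CREDENTIAL SSISRunCredential
    WITH IDENTITY = N'DOMAIN\ServiceUser', SECRET = N'StrongPasswordHere';

-- Create a SQL Server Agent proxy on the credential and allow it to run
-- SSIS package steps; it then shows up in the step's Run As dropdown.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SSISRunProxy',
    @credential_name = N'SSISRunCredential';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISRunProxy',
    @subsystem_name = N'SSIS';
```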

  • Cluster bug rears its head once more

    Back in July 2007, I found and verified a bug in SQL 2005 and Windows 2003 when clustering SQL Server.  From previous cluster setups, I found that if I was given a Windows 2003 R2 x64 server with SP2+ on it, the cluster went well whether the machine name was lowercase or not.  But the key to this entire bug is this: if you go into Cluster Manager to add the nodes to the cluster and you see a lowercase name in the list of cluster nodes, the SQL install will ALWAYS fail.  I have verified it so many times that I am convinced.  This only happens when you are on 2003 with SP1.  You can even verify that something is wrong by finding the registry keys that contain the server name and seeing that some are lowercase and some are uppercase.

    There is much more to this story, and one day I will write it all out further than I have, but with Windows 2008 and SQL 2008 on the horizon, hopefully these kinds of cases will not find their way into the mainstream.  It must be only our organization, because the Connect item above has not had much traction.

  • Final DB Scripter using Powershell

    I decided to finish the scripter even though I could not get Add-Content to work without it giving me a multitude of ??? in the file.  I decided that I could script out the tables, views and DB, and also output the contents of the INSERT statements, into 2 files instead of 1, then concatenate the files together, or just use one for the DDL and the other for the DML.  (Good separation of schema and data, huh?)

    To run this and get both outputs, you can run it like this:  (inside Powershell)

    PS C:\Scripts > ./ScriptDB.ps1 >somefile.txt

    Have a great one and enjoy.

    # ScriptDB.ps1
    # Scripts db, tables, views in database of your choice
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
    [System.Reflection.Assembly]::LoadWithPartialName("System.Data") | out-null

    $dbname = "Database Name"
    $filename = "c:\temp\output.txt"
    $serverName = "Server Name"

    $srv = new-object "Microsoft.SqlServer.Management.SMO.Server" $serverName
    $srv.SetDefaultInitFields([Microsoft.SqlServer.Management.SMO.View], "IsSystemObject")

    $db = New-Object "Microsoft.SqlServer.Management.SMO.Database"
    $db = $srv.Databases[$dbname]

    $scr = New-Object "Microsoft.SqlServer.Management.Smo.Scripter"
    $deptype = New-Object "Microsoft.SqlServer.Management.Smo.DependencyType"

    $scr.Server = $srv
    $options = New-Object "Microsoft.SqlServer.Management.SMO.ScriptingOptions"
    $options.DriAll = $True
    $options.ClusteredIndexes = $true
    $options.Default = $true
    $options.DriAll = $true
    $options.Indexes = $true
    $options.IncludeHeaders = $true
    $options.AppendToFile = $false
    $options.FileName = $filename
    $options.ToFileOnly = $true

    $scr.Options = $options
    $options.AppendToFile = $true
    $views = $db.Views | where {$_.IsSystemObject -eq $false}

    $tree = $scr.DiscoverDependencies($db.Tables, $True)

    $depwalker = New-Object "Microsoft.SqlServer.Management.SMO.DependencyWalker"
    $depcoll = $depwalker.WalkDependencies($tree)

    #Using the sp_generate_inserts from the webpage

    $col = $depcoll | foreach {
     "EXEC sp_generate_inserts '" + $_.Urn.GetAttribute("Name") + "'"
    }

    $ds = New-Object "System.Data.DataSet"
    $ds = $db.ExecuteWithResults($col)

    # using the Piping features to go from Tables, to Table, to Rows, to Row and to Column 1 $_[0]
    $ds.Tables | foreach { $_.Rows | foreach {$_[0]} }

