


Principal component analysis (PCA) is a technique used to emphasize the majority of the variation and bring out strong patterns in a dataset. It is often used to make data easy to explore and visualize, and it is closely connected to eigenvectors and eigenvalues. A short definition of the algorithm: PCA uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. The transformation is defined in such a way that the first principal component has the largest possible variance, and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to (i.e., uncorrelated with) the preceding components. The principal components are orthogonal because they are the eigenvectors of the covariance matrix, which is symmetric.

Initially, the variables used in the analysis form a multidimensional space, or matrix, of dimensionality m if you use m variables. The following picture shows a two-dimensional space. Values of the variables v1 and v2 define cases in this 2D space. The variability of the cases is spread across both source variables approximately equally. Finding principal components means finding m new axes, where m is exactly equal to the number of source variables. However, these new axes are selected in such a way that most of the variability of the cases is spread over a single new variable, or principal component, as shown in the following picture.

We can decompose the data matrix into eigenvectors and eigenvalues. Every eigenvector has a corresponding eigenvalue. An eigenvector is the direction of a line, and an eigenvalue is a number telling how much variance there is in the data in that direction, or how spread out the data is along the line.
The eigenvector with the highest eigenvalue is therefore the first principal component. Here is an example of the calculation of eigenvectors and eigenvalues for a simple two-dimensional matrix. The interpretation of the principal components is up to you and might be pretty complex. This fact might limit PCA's usability for business-oriented problems. PCA is used more in machine learning and statistics than in data mining, which is more end-user oriented and whose results should therefore be easily understandable. You use PCA to:
- Explore the data to explain the variability;
- Reduce the dimensionality: replace the m variables with n principal components, where n < m, in a way that preserves most of the variability;
- Use the residual variability not explained by the PCs for anomaly detection.
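The whole procedure fits in a few lines of code. The following is a minimal sketch, assuming a hypothetical dataset of two correlated variables v1 and v2: it centers the data, computes the covariance matrix, and extracts its eigenvectors and eigenvalues with NumPy.

```python
import numpy as np

# Hypothetical 2-D dataset: two correlated variables, v1 and v2.
rng = np.random.default_rng(42)
v1 = rng.normal(0.0, 1.0, 200)
v2 = 0.8 * v1 + rng.normal(0.0, 0.3, 200)
X = np.column_stack([v1, v2])

# 1. Center the data (PCA works on deviations from the mean).
Xc = X - X.mean(axis=0)

# 2. Covariance matrix of the centered data.
C = np.cov(Xc, rowvar=False)

# 3. Eigendecomposition; eigh is appropriate because C is symmetric.
eigenvalues, eigenvectors = np.linalg.eigh(C)

# 4. Sort by descending eigenvalue: the eigenvector with the largest
#    eigenvalue is the first principal component.
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# 5. Project the cases onto the new axes (the principal components).
scores = Xc @ eigenvectors

# Share of the variance explained by the first component.
explained = eigenvalues[0] / eigenvalues.sum()
print(f"First PC explains {explained:.1%} of the variance")
```

Because v2 is mostly a linear function of v1, nearly all the variability ends up on the first new axis, exactly as described above.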


With the K-Means algorithm, each object is assigned to exactly one cluster: it is assigned to that cluster with a probability equal to 1.0 and to all other clusters with a probability equal to 0.0. This is hard clustering. Instead of distance, you can use a probabilistic measure to determine cluster membership. For example, you can cover the objects with bell curves, one for each dimension, with a specific mean and standard deviation. A case is then assigned to every cluster with a certain probability. Because the clusters can overlap, this is called soft clustering. The Expectation-Maximization (EM) method changes the parameters of the bell curves to improve the covering in each iteration.

The Expectation-Maximization (EM) Clustering algorithm extends the K-Means paradigm in a different way: instead of assigning each object to a dedicated cluster, it assigns each object to a cluster according to a weight representing the probability of membership. In other words, there are no strict boundaries between clusters, and new means are computed based on weighted measures.

The EM algorithm iterates between two steps. In the first step, the "expectation" step, the algorithm calculates the cluster membership of each case, i.e., the probability that the case belongs to a given cluster from the initially defined k clusters. In the second step, the "maximization" step, the algorithm uses these cluster memberships to re-estimate the models' parameters, such as the location and scale parameters of a Gaussian distribution. The algorithm assumes that the data is drawn from a mixture of Gaussian distributions (bell curves). Take a look at the graphics: in the first row, the algorithm initializes the mixture distribution, here a mixture of several bell curves. In the second and third rows, the algorithm modifies the mixture distribution based on the data.
The iteration stops when it meets the specified stopping criteria, for example, when it reaches a certain likelihood-of-improvement rate between iterations. Step 1: Initializing the mixture distribution. Step 2: Modifying the mixture distribution. Step 3: Final modification. You use EM Clustering for the same purposes as K-Means Clustering. In addition, with the EM algorithm you can search for outliers based on combinations of the values of all input variables. You check the highest probability of each case over all clusters. The cases where the highest probability is still low do not fit well into any cluster. Said differently, they are not like other cases, and therefore you can assume that they are outliers. See the last figure in this blog post below. The green case belongs to cluster D with probability 0.95, to cluster C with probability 0.002, to cluster E with probability 0.0003, and so on. The red case belongs to cluster C with probability 0.03, to cluster B with probability 0.02, to cluster D with probability 0.003, and so on. The highest probability for the red case is still a low value; therefore, this case does not fit well into any of the clusters and thus might represent an outlier. Outliers can also represent potentially fraudulent transactions, so EM Clustering is useful for fraud detection as well. Finally, you can use EM Clustering for advanced data profiling to find rows with suspicious combinations of column values.
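The two EM steps can be sketched directly in code. This is a minimal one-dimensional illustration on hypothetical data drawn from two well-separated bell curves; a production implementation would also track the log-likelihood between iterations as the stopping criterion, rather than running a fixed number of iterations.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical 1-D data: a mixture of two bell curves around -5 and +5.
data = np.concatenate([rng.normal(-5.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])

# Initialize the mixture: rough means, unit variances, equal weights.
means = np.array([-1.0, 1.0])
stds = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def gauss(x, mu, sigma):
    # Density of the bell curve with the given mean and standard deviation.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: soft cluster memberships (probabilities) for every case.
    dens = np.array([w * gauss(data, m, s)
                     for w, m, s in zip(weights, means, stds)])
    resp = dens / dens.sum(axis=0)

    # M-step: re-estimate the weights, means, and standard deviations
    # from the weighted cases.
    n_k = resp.sum(axis=1)
    weights = n_k / len(data)
    means = (resp * data).sum(axis=1) / n_k
    stds = np.sqrt((resp * (data - means[:, None]) ** 2).sum(axis=1) / n_k)

print("estimated means:", np.sort(means))
```

After a few iterations the estimated means land on the true centers of the two bell curves, and each case carries a membership probability for both clusters rather than a hard assignment.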


Hierarchical clustering can be very useful because it is easy to see the optimal number of clusters in a dendrogram, and because the dendrogram visualizes both the clusters and the process of building them. However, hierarchical methods don't scale well. Just imagine how cluttered a dendrogram with 10,000 cases shown on it would be. K-Means is a distance-based partitioning algorithm that divides a dataset into a predetermined ("k") number of clusters around the average locations ("means"). In your mind, you intuitively know how to group people or any other cases. Groups do not need to have an equal number of members, and you can group according to one or more attributes. The algorithm comes from geometry. Imagine a record space with the attributes as dimensions; each record (case) is uniquely located in this space by the values of its attributes (variables). The algorithm initially creates k fictitious members and defines them as the means of the clusters. These fictitious cases are also called centroids. The values of the input variables for these centroids can be selected randomly. Some implementations also use a bit of heuristics, taking the marginal distributions of the attributes as a starting point and randomly perturbing from there. The algorithm then assigns each record to the nearest centroid, which yields the initial clusters. When the clusters are defined, the algorithm calculates the actual centroids of the clusters and gets new centroids. After the new centroids are calculated, the algorithm reassigns each record to the nearest centroid; some records jump from cluster to cluster. Now the algorithm can calculate new centroids and then new clusters. It iterates over these last two steps until the cluster boundaries stop changing. You can stop the iterations when fewer than a minimum number of cases, defined as a parameter, jump from cluster to cluster. Here is a graphical representation of the process. You can see the cases in a two-dimensional space.
The dark brown cases are the fictitious centroids. The green case is the one that will jump between clusters. After the centroids are selected, the algorithm assigns each case to the nearest centroid, giving us our three initial clusters. The algorithm can then calculate the real centroids of those three clusters, which means that the centroids move. The algorithm has to recalculate the cluster membership, and the green case jumps from the middle cluster to the bottom-left cluster. In the next iteration, no case jumps from one cluster to another, so the algorithm can stop. K-Means clustering scales much better than hierarchical methods. However, it has drawbacks as well. First of all, what is the optimal number of clusters? You can't know in advance, so you need to create different models with different numbers of clusters and then select the one that fits your data best. The next problem is the meaning of the clusters: there are no labels for the clusters that would be known in advance. Once the model is built, you need to check the distributions of the input variables in each cluster to understand what kind of cases constitute it. Only after this step can you label the clusters.
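The iteration described above fits in a few lines of code. Here is a minimal sketch on hypothetical two-dimensional cases: the centroids are initialized from randomly chosen cases, and the loop stops when the centroids no longer move.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 2-D cases forming three well-separated groups.
X = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in ([0, 0], [6, 0], [3, 6])])

k = 3
# Pick k cases at random as the initial (fictitious) centroids.
centroids = X[rng.choice(len(X), size=k, replace=False)]

while True:
    # Assign every case to its nearest centroid.
    dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)

    # Recalculate the actual centroids of the new clusters
    # (keeping the old centroid if a cluster ends up empty).
    new_centroids = np.array([
        X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

    # Stop when the cluster boundaries no longer change.
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print("final centroids:\n", np.round(centroids, 1))
```

Note that, as the text says, the result depends on the random initialization, so in practice you would run the algorithm several times and keep the best model.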


Clustering is the process of grouping data into classes or clusters so that objects within a cluster have high similarity to one another but are very dissimilar to objects in other clusters. Dissimilarities are assessed based on the attribute values describing the objects. There are a large number of clustering algorithms; the major methods can be classified into the following categories.
- Partitioning methods. A partitioning method constructs K partitions of the data, which satisfy the following requirements: (1) each group must contain at least one object, and (2) each object must belong to exactly one group. Given the initial number K of partitions to construct, the method creates initial partitions. It then uses an iterative relocation technique that attempts to improve the partitioning by moving objects from one group to another. There are various kinds of criteria for judging the quality of the partitions. The most popular methods include the k-means algorithm, where each cluster is represented by the mean value of the objects in the cluster, and the k-medoids algorithm, where each cluster is represented by one of the objects located near the center of the cluster.
- Hierarchical methods. A hierarchical method creates a hierarchical decomposition of the given set of data objects. These methods are agglomerative or divisive. The agglomerative (bottom-up) approach starts with each object forming a separate group. It successively merges the objects or groups closest to one another, until all groups are merged into one. The divisive (top-down) approach starts with all the objects in the same cluster. In each successive iteration, a cluster is split into smaller clusters, until eventually each object forms its own cluster or a termination condition holds.
- Density-based methods. Methods based on the distance between objects can find only spherical-shaped clusters and encounter difficulty in discovering clusters of arbitrary shapes, so other methods have been developed based on the notion of density. The general idea is to continue growing a given cluster as long as the density (number of objects or data points) in the "neighborhood" exceeds some threshold; that is, for each data point within a given cluster, the neighborhood of a given radius has to contain at least a minimum number of points.
- Model-based methods. Model-based methods hypothesize a model for each of the clusters and find the best fit of the data to the given model. A model-based technique might locate clusters by constructing a density function that reflects the spatial distribution of the data points. Unlike conventional clustering, which primarily identifies groups of like objects, this conceptual clustering goes one step further by also finding characteristic descriptions for each group, where each group represents a concept or a class.
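The density-based idea can be sketched as a neighborhood-growing loop on a few hypothetical points. This is only an illustration of the principle, not a full DBSCAN implementation: for simplicity, border points that are not dense themselves are treated as noise here.

```python
import numpy as np

# Hypothetical 2-D points: two dense groups plus one isolated point.
points = np.array([[0.0, 0.0], [0.2, 0.0], [0.1, 0.2],
                   [5.0, 5.0], [5.2, 5.0], [5.1, 5.2],
                   [10.0, 0.0]])
eps, min_pts = 0.5, 2  # neighborhood radius and density threshold

labels = [None] * len(points)  # cluster id per point, or "noise"
cluster = 0
for start in range(len(points)):
    if labels[start] is not None:
        continue
    # Grow a cluster from this point while the neighborhoods stay dense.
    seeds, members = [start], []
    while seeds:
        p = seeds.pop()
        if labels[p] is not None:
            continue
        neighbors = np.where(
            np.linalg.norm(points - points[p], axis=1) <= eps)[0]
        if len(neighbors) >= min_pts:  # dense enough: keep growing
            labels[p] = cluster
            members.append(p)
            seeds.extend(int(q) for q in neighbors if labels[q] is None)
    if members:
        cluster += 1
    else:
        labels[start] = "noise"  # not dense: an isolated point

print(labels)  # e.g. [0, 0, 0, 1, 1, 1, 'noise']
```

Because clusters grow point by point through dense neighborhoods, this approach can follow arbitrary shapes, which is exactly where distance-to-centroid methods struggle.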
Training a hierarchical clustering model typically starts by calculating a distance matrix, a matrix with the distances between data points in a multidimensional hyperspace where each input variable defines one dimension. The distance measure can be a geometrical distance or some other, more complex measure. A dendrogram is a tree diagram frequently used to illustrate the arrangement of the clusters produced by hierarchical clustering; dendrograms are also often used in computational biology to illustrate the clustering of genes or samples. The following set of pictures shows the process of building an agglomerative hierarchical clustering dendrogram. Cluster analysis segments a heterogeneous population into a number of more homogeneous subgroups, or clusters. Typical usage scenarios include:
- Discovering distinct groups of customers
- Identifying groups of houses in a city
- In biology, deriving animal and plant taxonomies
- Making predictions, once the clusters are built and the distribution of a target variable in the clusters is calculated.
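The agglomerative process shown in the pictures can be sketched as a single-linkage merge loop; the recorded merge history is exactly what a dendrogram visualizes. The points below are hypothetical one-dimensional cases.

```python
import numpy as np

# Hypothetical 1-D cases; agglomerative (bottom-up) clustering with
# single linkage: repeatedly merge the two closest clusters.
points = np.array([1.0, 1.2, 5.0, 5.1, 9.0])
clusters = [[i] for i in range(len(points))]
merges = []  # the merge history: this is what a dendrogram draws

def single_link(a, b):
    # Cluster distance = distance between the two closest members.
    return min(abs(points[i] - points[j]) for i in a for j in b)

while len(clusters) > 1:
    # Find the pair of clusters with the smallest distance.
    pairs = [(single_link(clusters[i], clusters[j]), i, j)
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]
    d, i, j = min(pairs)
    merges.append((sorted(clusters[i]), sorted(clusters[j]), d))
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] \
               + [clusters[i] + clusters[j]]

for left, right, d in merges:
    print(f"merged {left} + {right} at distance {d:.1f}")
```

The growing merge distances are the heights of the dendrogram branches; a large jump in those distances is the usual visual hint for the optimal number of clusters. Note also why such methods don't scale: every iteration recomputes distances over all remaining cluster pairs.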


The Association Rules algorithm is specifically designed for use in market basket analyses. This knowledge can additionally help in identifying cross-selling opportunities and in arranging attractive packages of products. This is the most popular algorithm used in web sales. You can even include additional discrete input variables and predict purchases over classes of input variables.

Association Rules Basics

The algorithm considers each attribute/value pair (such as product/bicycle) as an item. An itemset is a combination of items in a single transaction. The algorithm scans through the dataset trying to find itemsets that tend to appear in many transactions. It then expresses the combinations of items as rules (such as "if customers purchase potato chips, they will purchase cola as well"). Association models often work against datasets containing nested tables, such as a customer list with a nested purchases table. If a nested table exists in the dataset, each nested key (such as a product in the purchases table) is considered an item.

Understanding Measures

Besides the itemsets and the rules, the algorithm also returns some measures for them. Imagine the following transactions:
- Transaction 1: Frozen pizza, cola, milk
- Transaction 2: Milk, potato chips
- Transaction 3: Cola, frozen pizza
- Transaction 4: Milk, pretzels
- Transaction 5: Cola, pretzels
The Association Rules measures include:
- Support, or frequency, is the number of cases that contain the targeted item or combination of items; support is therefore a measure for the itemsets.
- Probability, also known as confidence, is a measure for the rules. The probability of an association rule is the support for the combination divided by the support for the condition. For example, the rule "If a customer purchases cola, then they will purchase milk" has a probability of 33%. The support for the combination (milk + cola) is 20%, occurring in one of the five transactions. However, the support for the condition (cola) is 60%, occurring in three out of the five transactions. This gives a confidence of 0.2 / 0.6 = 0.33, or 33%.
- Importance is a measure for both itemsets and rules. When importance is calculated for an itemset, a value of one means the items in the itemset are independent; if importance is greater than one, the items are positively correlated, and if importance is lower than one, they are negatively correlated. When importance is calculated for a rule "If {A} then {B}", a value of zero means there is no association between the items. Positive importance means that the probability for item {B} goes up when item {A} is in the basket, and negative importance means that the probability for item {B} goes down when item {A} is in the basket.
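The support and confidence calculations are easy to verify in code. The following sketch computes them for the five transactions above; the two helper functions are my own illustrative names, not part of any specific product's API.

```python
# The five transactions listed above.
transactions = [
    {"frozen pizza", "cola", "milk"},
    {"milk", "potato chips"},
    {"cola", "frozen pizza"},
    {"milk", "pretzels"},
    {"cola", "pretzels"},
]
n = len(transactions)

def support(itemset):
    # Share of transactions that contain every item in the itemset.
    return sum(itemset <= t for t in transactions) / n

def confidence(condition, consequence):
    # Support for the combination divided by support for the condition.
    return support(condition | consequence) / support(condition)

print(f"support(cola)            = {support({'cola'}):.0%}")
print(f"support(cola + milk)     = {support({'cola', 'milk'}):.0%}")
print(f"confidence(cola -> milk) = {confidence({'cola'}, {'milk'}):.0%}")
```

Running this reproduces the numbers from the text: the condition (cola) has support 60%, the combination has support 20%, and the rule's confidence is 0.2 / 0.6 = 33%.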
Common Business Use Cases

You use the Association Rules algorithm for market basket analyses. You can identify cross-selling opportunities or arrange attractive packages of products. This is the most popular algorithm used in web sales. You can even include additional input variables and predict purchases over classes of input variables.


In two days, I am starting my first conference trip of this year, so it seems high time to write down my plan for the first semester. Of course, I'm adding my food plan for each event :)
- SQL Saturday #374 Vienna. On Friday, February 27th, I am giving a full-day seminar, "Advanced Data Modeling Topics", in Vienna. On Saturday, I am also giving the presentation "Identity Mapping and De-Duplicating". I am looking forward to the Käsekrainer. When I met it for the first time, I thought it was a bad joke. Our Kranjska sausage, or, as Austrians say, the Krainer sausage, is probably the most controlled sausage, the sausage with the best ingredients possible. I was wondering what kind of barbarians would put cheese in it. In addition, the Käsekrainer is much cheaper. It consists of pork, veal, cheese, and a fistful of unidentifiable ingredients. The Käsekrainer is probably the food unacceptable to the highest number of religions and personal beliefs. However, over time, I started to love it :) A Käsekrainer mit Semmel, Senf und Kren (with a roll, mustard, and horseradish) with my name on it is already waiting for me in Vienna!
- Of course, I don’t want to miss SQLBits. On Saturday, March 7th, I am giving the presentation "Analysing Text with SQL Server 2014". British cuisine might not be the most famous in the world; nevertheless, there is a dish I have never had an opportunity to taste yet. This is one of three conferences in England I am doing this semester, and at least once I want to get the spotted dick.
- In the week of March 23rd, I am returning to London for the DevWeek conference. I am giving a seminar, "BI with Microsoft Tools: from Enterprise to a Personal Level", and five presentations there (Data Extraction and Transformation with Power Query and M; Data Mining Algorithms Part 1; Data Mining Algorithms Part 2; Introducing R and Azure ML; Visualising Geographic and Temporal Data with Power Map). If I don’t get the spotted dick during SQLBits, I should have enough time during DevWeek.
- SQL Saturday #376 Budapest. The schedule is not public yet, but I am giving a presentation there. I would not like to miss the halászlé, the Hungarian fish soup.
- And of course, England again: SQL Saturday #372 Exeter. My presentation there is "Analysing Text with SQL Server 2014 and R". And it is my last chance for the spotted dick this semester.
- SQL Saturday #369 Lisbon. I am giving a seminar, "Data Mining Algorithms in SQL Server, Excel, R, and Azure ML", on Thursday, May 14th, and two presentations on Saturday (again, the schedule is not public yet, so I am not revealing the titles of the sessions). Of course, visiting Portugal and not having a pastel de nata is not acceptable.
- May 18th–20th: NTK, Portorož, Slovenia. One presentation; the title is still a secret, but my co-presenter, Milica Medić, is not. Feel free to envy me :) And yes, I am taking her for the horse steak; I would not miss this at home.
- SQL Saturday #384 Varna. Again, the schedule is not public yet, but I am speaking there. I would love to return to Bulgaria. In addition, this will be my first time in Varna and on the Black Sea coast. Oh, there are so many Bulgarian dishes I want to have again! Definitely баница (greasy pastry deliciousness) and шкембе чорба (tripe soup)!
- SQL Saturday #409 Rheinland. The schedule is still a secret. However, the Rheinischer Sauerbraten is in my plan. With a lot of beer.
- SQL Saturday #419 Bratislava. Finally, Northern Slovenia, aka Slovakia, is getting its own SQL Saturday. Of course, I cannot miss it. There are not many dishes in Slovakia that would not be available in Southern Slovakia, aka Slovenia. However, bryndzové halušky (small dumplings made of potato dough with sheep cheese, topped with fried bacon) is not well known in Ljubljana, so I am having one in Bratislava. But I am definitely refusing to eat any dish with the word "kuraci" in its name. Search for the translation of this word from Slovak and from Slovenian, and you will understand why.
Which events are you visiting? I hope we meet at some of them.


Data mining is the most advanced part of business intelligence. With statistical and other mathematical algorithms, you can automatically discover patterns and rules in your data that are hard to notice with online analytical processing and reporting. However, you need to thoroughly understand how the data mining algorithms work in order to interpret the results correctly. In this blog post I introduce data mining, and in the following posts I will unveil the black box and explain how the most popular algorithms work.

Data Mining Definition

Data mining is a process of exploration and analysis, by automatic or semi-automatic means, of historical data in order to discover patterns and rules, which can then be used on new data for predictions and forecasting. With data mining, you deduce hidden knowledge by examining, or training on, the data. The unit of examination is called a case, which can be interpreted as one appearance of an entity, or a row in a table. The knowledge consists of patterns and rules. In the process, you use the attributes of a case, which are called variables in data mining terminology. For a better understanding, you can compare data mining to Online Analytical Processing (OLAP), which is a model-driven analysis where you build the model in advance. Data mining is a data-driven analysis, where you search for the model by examining the data with data mining algorithms. There are many alternative names for data mining, such as knowledge discovery in databases (KDD) and predictive analytics. Originally, data mining was not the same as machine learning: data mining gives business users insights for actionable decisions, while machine learning determines which algorithm performs best for a specific task. Nowadays, however, data mining and machine learning are in many cases used as synonyms.
The Two Types of Data Mining

Data mining techniques are divided into two main classes:
- The directed, or supervised, approach: you use known examples and apply the gleaned information to unknown examples to predict selected target variable(s).
- The undirected, or unsupervised, approach: you discover new patterns inside the dataset as a whole.
Some of the most important directed techniques include classification, estimation, and forecasting. Classification means examining a new case and assigning it to a predefined discrete class; examples are assigning keywords to articles and assigning customers to known segments. Very similar is estimation, where you try to estimate the value of a variable of a new case from a continuously defined pool of values. You can, for example, estimate the number of children or the family income. Forecasting is somewhat similar to classification and estimation. The main difference is that you can’t check the forecasted value at the time of the forecast; of course, you can evaluate it if you just wait long enough. Examples include forecasting which customers will leave in the future, which customers will order additional services, and the sales amount in a specific region at a specific time in the future. The most common undirected techniques are clustering and affinity grouping. An example of clustering is looking through a large number of initially undifferentiated customers and trying to see whether they fall into natural groupings. This is a pure example of undirected data mining, where the user has no preordained agenda and hopes that the data mining tool will reveal some meaningful structure. Affinity grouping is a special kind of clustering that identifies events or transactions that occur simultaneously. A well-known example of affinity grouping is market basket analysis, which attempts to understand which items are sold together at the same time.

Common Business Use Cases

Some of the most common business questions that you can answer with data mining include:
- What’s the credit risk of this customer?
- Are there any groups of my customers?
- What products do customers tend to buy together?
- How much of a specific product can I sell in the next time period?
- What is the potential number of customers shopping in this store?
- What are the major groups of my web-click customers?
- Is this a spam email?
However, the actual questions you might want to answer with data mining can be far broader and depend only on your imagination. For an unconventional example, you might use data mining to try to lower the mortality rate in a hospital. Data mining is already widely used in many different applications. Some of the typical usages, along with the most commonly used algorithms for a specific task, include the following:
- Cross-selling: widely used for web sales, with the Association Rules and Decision Trees algorithms.
- Fraud detection: an important task for banks and credit card issuers, who want to limit the damage that fraud creates, both for customers and companies. The Clustering and Decision Trees algorithms are commonly used for fraud detection.
- Churn detection: service providers, including telecommunications, banking, and insurance companies, do this to detect which of their subscribers are about to leave them, in an attempt to prevent it. Any of the directed methods, including the Naive Bayes, Decision Trees, or Neural Network algorithms, is suitable for this task.
- Customer Relationship Management (CRM) applications: based on knowledge about customers, which you can extract with segmentation using, for example, the Clustering or Decision Trees algorithm.
- Website optimization: to do this, you should know how your website is used. Microsoft developed a special algorithm, the Sequence Clustering algorithm, for this task.
- Forecasting: nearly any business would like some forecasting in order to prepare better plans and budgets. The Time Series algorithm is specially designed for this task.
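Several of the tasks above mention the Naive Bayes algorithm. As a small taste of what the upcoming posts cover, here is a minimal sketch of its counting scheme on a hypothetical churn dataset; the attribute names and the add-one smoothing are my own illustrative choices.

```python
from collections import defaultdict

# Hypothetical training cases: (input attributes, target class).
# A toy churn example: (contract type, payment type) -> churned?
cases = [
    (("monthly", "manual"), "yes"),
    (("monthly", "manual"), "yes"),
    (("monthly", "auto"), "no"),
    (("yearly", "auto"), "no"),
    (("yearly", "manual"), "no"),
    (("monthly", "auto"), "yes"),
]

# Count class frequencies and per-class attribute-value frequencies.
class_counts = defaultdict(int)
value_counts = defaultdict(int)  # (class, attribute index, value) -> count
for attrs, cls in cases:
    class_counts[cls] += 1
    for i, v in enumerate(attrs):
        value_counts[(cls, i, v)] += 1

def predict(attrs):
    # Naive Bayes: P(class) times the product of P(value | class) for
    # each input attribute, with add-one (Laplace) smoothing; the "+ 2"
    # in the denominator assumes two possible values per attribute.
    scores = {}
    for cls, count in class_counts.items():
        p = count / len(cases)
        for i, v in enumerate(attrs):
            p *= (value_counts[(cls, i, v)] + 1) / (count + 2)
        scores[cls] = p
    return max(scores, key=scores.get)

print(predict(("monthly", "manual")))  # prints "yes"
```

The model is nothing more than these frequency tables, which is exactly why Naive Bayes trains so quickly and makes a good starting point for a predictive analytics project.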
A Quick Introduction to the Most Popular Algorithms

In order to raise the expectations for the upcoming blog posts, I am adding a brief, condensed introduction to the most popular data mining algorithms.
- Association Rules: the algorithm used for market basket analysis. It defines an itemset as a combination of items in a single transaction; it then scans the data and counts the number of times the itemsets appear together in transactions. Market basket analysis is useful for detecting cross-selling opportunities.
- Clustering: groups cases from a dataset into clusters with similar characteristics. You can use the Clustering method to group your customers for your CRM application and find distinguishable groups of customers. In addition, you can use it for finding anomalies in your data: if a case does not fit well into any cluster, it is a kind of exception and might, for example, be a fraudulent transaction.
- Naïve Bayes: calculates probabilities for each possible state of the input attribute for every single state of the predictable variable. These probabilities are used to predict the target attribute based on the known input attributes of new cases. The Naïve Bayes algorithm is quite simple and builds its models quickly; therefore, it is very suitable as a starting point in your predictive analytics project.
- Decision Trees: the most popular data mining algorithm; it predicts discrete and continuous variables. It uses the discrete input variables to split the tree into nodes in such a way that each node is purer in terms of the target variable, i.e., each split leads to nodes where a single state of the target variable is represented better than the other states.
- Regression Trees: for continuous predictable variables, you get a piecewise multiple linear regression formula, with a separate formula in each node of the tree. The discrete input variables are used to split the tree into nodes. A tree that predicts continuous variables is a Regression Tree. Use Regression Trees for the estimation of a continuous variable; for example, a bank might use this technique to estimate the family income of a loan applicant.
- Linear Regression: predicts continuous variables using a single multiple linear regression formula; the input variables must be continuous as well. Linear Regression is a simple case of a Regression Tree, a tree with no splits. Use it for the same purposes as Regression Trees.
- Neural Network: this algorithm comes from artificial intelligence, but you can use it for predictions as well. Neural networks search for nonlinear functional dependencies by performing nonlinear transformations on the data in layers, from the input layer through hidden layers to the output layer. Because of the multiple nonlinear transformations, neural networks are harder to interpret than Decision Trees.
- Logistic Regression: as Linear Regression is a simple Regression Tree, Logistic Regression is a Neural Network without any hidden layers.
- Support Vector Machines: supervised learning models with associated learning algorithms that analyse data and recognize patterns, used for classification. A support vector machine constructs a hyperplane, or a set of hyperplanes, in a high-dimensional space where the input variables define the dimensions. The hyperplanes split the data points into discrete groups of the target variable. Support Vector Machines are powerful for some specific classifications, such as text and image classification and handwritten character recognition.
- Sequence Clustering: searches for clusters based on a model, and not on the similarity of cases as Clustering does. The models are defined on sequences of events by using Markov chains. A typical usage of Sequence Clustering would be an analysis of your company’s website usage, although you can use this algorithm on any sequential data.
- Time Series: you can use this algorithm to forecast continuous variables. Time Series often denotes two different internal algorithms: for short-term forecasting, the Auto-Regression Trees (ART) algorithm is used; for long-term prediction, the Auto-Regressive Integrated Moving Average (ARIMA) algorithm is used.

Conclusion

This brief introduction to data mining should give you an idea of what you could use it for and an overview of which algorithms are appropriate for the business problem you are trying to solve. I guess you have also noticed that I am not talking about any specific technology here. These most popular data mining algorithms are available in many different products; for example, you can find them in SQL Server Analysis Services, Excel with the Data Mining Add-ins, R, Azure ML, and more. Please learn how to use them with your specific product from the product documentation, by reading books that deal with your product, or by attending a course about the product. I hope you got excited enough to read the upcoming blog posts and visit some of my presentations at various conferences.


We are close to the publishing day of the T-SQL Querying book. Of course, as always in this series, the main author of the book is Itzik Ben-Gan. This time, besides me, Adam Machanic and Kevin Farlee are the co-authors. The information I want to share now is that you can get a substantial discount if you pre-order the book today, Monday, February 16th, 2015. Pearson is running the Presidents Day Event and giving the following discounts on this and some other products:
- Buy 1, save 35%
- Buy 2, save 50%
- Up to 70% off on featured video titles
You can pre-order the book using this link. Once the page opens, just click the President’s Day Sale banner and select our book, or any other book on sale. Happy querying!


If you plan to upgrade to SQL Server 2014, then this technical guide is a must. With 429 pages, it is a complete book, yet it is still available as a free download here. It covers all products and features in the SQL Server suite, and upgrades from versions 2005, 2008, 2008 R2, and 2012. It supplements the information available in Books Online. Besides the actual upgrade, the white paper also covers planning and pre- and post-upgrade tasks. This is the fourth upgrade technical guide I have co-authored; altogether, I was involved in the guides for versions 2008, 2008 R2, 2012, and 2014. This is the complete list of authors: Ron Talmage, Richard Waymire, James Miller, Vivek Tiwari, Ken Spencer, Paul Turley, Danilo Dominici, Dejan Sarka, Johan Åhlén, Nigel Sammy, Allan Hirt, Herbert Albert, Antonio Soto, Régis Baccaro, Milos Radivojević, Jesús Gil, Simran Jindal, Craig Utley, Larry Barnes, Pablo Ahumada. Thanks to everybody for a smooth writing and editing process!


So the event is over. I think I can say for all three organizers, Mladen Prajdić, Matija Lah, and me, that we are tired now. However, we are extremely satisfied. It was a great event. First, a few numbers, compared with SQL Saturday #274, the first SQL Saturday Slovenia event, which took place last year.

                       SQL Saturday #274    SQL Saturday #356
  People                     135                  220
  Show rate                  ~87%                 ~95%
  Proposed sessions          40                   82
  Selected sessions          15                   24
  Selected speakers          14                   23
  Countries                  12                   16

The numbers nearly doubled. We are especially proud of the show rate; at ~95%, it is much better than the average for a free event, and probably the highest so far for a SQL Saturday. We asked registered attendees to be fair and to unregister if they knew they could not attend the event, in order to make room for those on the waiting list. An old Slovenian proverb says "A nice word finds a nice place", and it works: 36 registered attendees unregistered. Therefore, we have to thank both the attendees of the event and those who unregistered. Of course, as always, we also need to thank all of the speakers, sponsors, and volunteers. All volunteers were very helpful; however, I would like to especially point out Saša Mašič. Her work went well beyond simple volunteering. I must also mention FRI, the Faculty of Computer and Information Science, which hosted the event for free. It is also worth mentioning that we are lucky to live in Ljubljana, such a beautiful city with extremely nice inhabitants who enjoy good food, hanging around and mingling, and long parties. Because of that, we could be sure in advance that both speakers and attendees from other countries would enjoy spending time here outside the event as well, that they would feel safe, and that they would get help whenever they needed it. From the organizational perspective, we tried to do our best, and we hope that everything was OK for speakers, sponsors, volunteers, and attendees. Thank you all!


SQL Saturday #356 Slovenia is practically full. OK, actually we have already reached the expected number of registrations (200). We have raised the limit to 220, and we are close to that number as well. Therefore, we (the organizers: Matija, Mladen, and I) need to ask all of you who are registered and already know that you will not be able to attend: please unregister and make room for those who would like to attend but have not registered yet. And those of you who would like to register, please do it as soon as possible, in order to get the confirmation immediately, or at least to be at the top of the waiting list. We would also like to make an appeal to all of you who are registered: please come. Please remember that this conference was made possible by the speakers, who use their own time and come at their own expense to give you state-of-the-art presentations; by the sponsors, who finance the venue, the food, the raffle awards, and more; and of course by the many volunteers who spend their free time helping with the organization. We are also paying the catering company for a fixed number of meals; the money spent on registered attendees who do not show up is therefore simply thrown away. In short: all you need to do is wake up, get out of bed, get into a good mood, and come to the event to get top presentations, good food, and meet friends! Thank you all!


We are approaching the PASS SQL Saturday #356 Slovenia event. Today, we published the schedule. With so many submissions, we had a hard time selecting only 24 sessions. Unfortunately, we could not accommodate all of the speakers who submitted sessions. However, we decided to also invite the speakers who were not selected to the speakers' dinner. Therefore, if you submitted some proposals and cannot find yourself on the schedule, don't give up. Please join us anyway, and enjoy the dinner (and the party) with the other speakers and with us! You will also get a personal invitation. Potential attendees, the conference is filling up quickly; if you want to join the event, please register soon. Finally, potential sponsors, we are still accepting new sponsors.


So this is the final analysis of the speakers and the sessions submitted for the PASS SQL Saturday #356 Slovenia event, December 13th, Ljubljana, Slovenia. The call for speakers closed on October 15th. The number of proposed sessions and speakers is impressive. I am really honored and humbled by the number of foreign speakers who sent their proposals. I simply can't remember a conference in Slovenia dealing with Microsoft technology with that many top speakers from foreign countries. Thank you, all speakers! It is definitely an event worth visiting.


I am proud and glad I can announce two top preconference seminars at the PASS SQL Saturday #356 Slovenia conference. The speakers and the seminar titles are: Both seminars will take place on Friday, December 12th, in the classrooms of our sponsor Kompas Xnet. The price for a seminar is €149, with an early bird price of €119, valid until October 31st. I am also using this opportunity to explain how and why we decided on these two seminars. The decision was made by the conference organizers: Matija Lah, Mladen Prajdič, and Dejan Sarka. There has been a lot of discussion on different social networks about PASS Summit preconference seminars lately. If you have any objections to our seminars, please do not start big discussions in public; please raise them with the three of us directly. First of all, unlike at the PASS Summit seminars, the speakers are not going to earn big money here, and therefore it is not really worth spending much time and energy debating our decision. We think that any of the speakers who sent proposals for our SQL Saturday could present a top-quality seminar. We would like to enable a seminar for every speaker who wants to deliver one. However, in a small country, we will already have a hard time filling the two seminars we currently have. Our intention is to reimburse at least part of the money the speakers spent out of their own pockets on travel and accommodation. In our opinion, it makes sense to do this for the speakers who spent the most on travelling. Coming here from the USA is expensive, and it also takes three days in both directions. That's why we decided to organize the seminars for the first two speakers from the USA. Of course, this is not the last event. If everything goes well with SQL Saturday #356 and with the seminars, we will definitely try to organize more events in the future, and invite more speakers to deliver seminars as well. Thank you for understanding!


For a change, I am posting a blog in the Slovenian language. I am posting the details about PASS SQL Saturday #356 Slovenia. Of course, everybody is invited to attend the conference, submit proposals, or even join us as a sponsor. The vast majority of the presentations will be in English anyway. So here are the details for my countrymen. The Slovenian SQL Server community is not resting. We are preparing a new event, and for the second time it will be a PASS SQL Saturday. This time we are organizing the event on Saturday, December 13th, at FRI, Ljubljana (http://www.sqlsaturday.com/356/eventhome.aspx). This will also be the only event in the second half of this year in Slovenia with sessions dedicated to SQL Server. The idea for the events under the common name PASS SQL Saturday was born in the worldwide SQL Server user community, the Professional Association for SQL Server, or PASS (http://www.sqlpass.org). These events are a kind of answer to the economic crisis and, above all, to the crisis of education. A lack of investment in education is a widespread global problem. In the SQL Server community, we decided to try to offer SQL Server users at least something: at least a free event. How does it work? The first part of the equation is us, the speakers. Not only are SQL Saturday sessions unpaid; every speaker even covers all of their own expenses. Of course, the speaker is also spending their free time. Basically, it is a simple "you scratch my back, I scratch yours" principle. Before you add your own interpretation of who scratches whom, let me explain. Slovenian speakers, especially the Slovenian SQL Server MVPs, often speak at similar events abroad. Therefore, it is not hard for us to attract foreign speakers to our event. Together, we can thus bring top sessions to all of the local communities. The second part of the equation is the sponsors. The events are actually quite cheap, since, as I already mentioned, there are no speaker costs. We need classrooms and some food and drink.
PASS contributes some money, and above all there is always great support from Microsoft. Let me especially point out the support in Slovenia. When we organized the event for the first time last year, we went for a true minimum of costs. We tried to find a sponsor who would give us the venue for free. When we explained what kind of event it was, first at a SLODUG meeting and then in person, the response was incredible. In two days, we received four offers of a free venue! Even more, the caterer took the job practically without profit; such a good lunch for the price we had last year will be hard to repeat. The third part of the equation is you, the attendees. Last year's response exceeded our expectations. And it was not just passive attendees; we literally had to fend off volunteers who wanted to help in any way possible. Even more, the ratio between those who came and the number of registrations was the best in the world. In other words, the drop-out rate was far below the world average. It really means something when you see people accept an event with enthusiasm and respect. Let me return for a moment to the first part of the equation, the speakers. SQL Saturday is not intended only for established speakers; beginners are welcome too, including those who would like to start down the speaking path. If you think you can start building a career in this area, register as a PASS user, sign up on our event's web site (http://www.sqlsaturday.com/356/callforspeakers.aspx), and send your session proposals. We advise you to send proposals and also to present in English. Only this way can you count on a possible breakthrough abroad. This year, we will try to introduce several novelties. The first is two preconference seminars, which will take place on Friday, December 12th, at the Microsoft Slovenia offices. The seminars will not be free, but the price will be very reasonable. One seminar will be dedicated to the relational side, and one to business intelligence.
The main purpose of the seminars is simple: to enable at least partial reimbursement of costs for those speakers who come from really far away. Such speakers not only pay a lot for plane tickets, but also spend a total of three days travelling to us and back home. The second novelty is a fourth track of sessions, open to related technologies, i.e., technologies that use or abuse SQL Server, such as .NET and SharePoint Server. The third novelty, if the sponsors are interested, will be short sponsor presentation sessions. Let me also mention that this year we started another quite special and originally Slovenian event. This was the June SQL picnic, which was also extremely well received, even though it was a much smaller event. We hope this event becomes traditional as well. In any case, we plan to repeat the SQL picnic next year. What can you do? A lot. Above all, register. Register as attendees, register as volunteers, register for the conference, register for a seminar. Last year, there were 150 of us altogether. Let there be 200 of us this year. You can also help us by finding sponsors. Maybe your own company is interested in presenting at this conference? Once again: this will be the only event in the rest of the year where more or less the whole SQL Server user community will gather. And one more thing: just like last year, please really do attend the event. That way, you will show the greatest respect to the speakers, sponsors, and organizers. Thanks in advance.




