Business Intelligence, Data, Analytics, Analysis, Mining, Models, modeling, algorithm, SSAS, SSIS, SSRS, MDX, DMX, Microsoft, SAS, SAP, Jaspersoft, Pentaho, Oracle, IBM, DB2, Informatica, Time Series, Dimensions, Facts, Measures, schemas, time, region, spatial, map, customer
Tuesday, 6 November 2012
Big Data (Social Media) crucial to US Elections
CNN's Atika Shubert explains how people have used social media during the U.S. presidential campaign.
Thursday, 18 October 2012
Blog Series: Creating BI Solutions with BISM Tabular (Part 1 of 4)
Guest blogger Dan Clark is a senior BI consultant for Pragmatic Works. As a member of the Pragmatic Works SEAL Team (Special Engagements and Learning) he is focused on learning new BI technologies and training others how to best implement the technology. Dan has published several books and numerous articles on .NET programming and BI development. He is a regular speaker at various developer/database conferences and user group meetings, and enjoys interacting with the Microsoft developer and database communities.
Introduction
Business Intelligence, the process of analyzing business data to support better decision-making, has become a necessity for most businesses in today’s competitive environment. To bring BI to small and mid-sized companies, there needs to be a set of affordable and easy-to-use tools at each company’s disposal. Microsoft has long had a vision of and commitment to bringing the power of BI to the masses by creating a set of tools and technologies that allow for self-service BI. To realize this vision, Microsoft introduced the Business Intelligence Semantic Model (BISM), which supports two models: the traditional multidimensional model and a new tabular model. The tabular model is based on a relational table model that is familiar to DBAs, developers, and power users. In addition, Microsoft has created a new language for querying the BISM tabular model. This language, Data Analysis Expressions (DAX), is similar to the syntax used in Excel calculations and should be familiar to Excel power users.
The goal of this blog series is to expose you to the process needed to create a BISM Tabular Model in SQL Server 2012 and deploy it to an Analysis Server where it can be exposed to client applications. The first part of this series covers setting up a tabular model project and importing data into the tabular model. Part II will look at table relations, implementing calculations, and creating measures with DAX. Part III will cover implementing semi-additive measures and securing the model, and Part IV will conclude the series by looking at how you deploy and connect to the tabular model from a client application.
Part I
What Makes Tabular Mode Different?
SSAS supports both traditional multidimensional models using the MOLAP storage engine and tabular models using the new xVelocity engine. However, the same instance of SSAS cannot support both types of projects; if you need to support both multidimensional and tabular projects, you must install a separate instance for each. While the traditional MOLAP engine is optimized for multidimensional modeling and uses pre-built aggregations stored on disk, the xVelocity engine takes a different approach. xVelocity is an in-memory column-store engine that combines data compression and scanning algorithms to deliver fast performance with no need for pre-built aggregations. In addition, since all aggregation occurs on the fly in memory, it avoids costly I/O reads against disk storage.
When to Use an SSAS Tabular Model vs. Just Publishing a PowerPivot Workbook?
Microsoft now offers several options within the BI platform (Figure 1), each targeted at a different type of BI analysis, from personal BI to corporate BI. As you move from a personal BI implementation to team and corporate BI implementations, tabular models in SSAS provide a more robust solution in terms of scale, management, security, and development tools. For example, tabular models in SSAS do not have a hard upper data size limit. Tabular models also support partitioning, which is used to manage the processing of large data volumes. In terms of security, PowerPivot is limited to the workbook level, while tabular models hosted in SSAS support row-level and dynamic security.
Setting up the Development Environment
In order to create tabular data models, you need to install SQL Server Data Tools (formerly known as Business Intelligence Development Studio). You will also need an instance of SQL Server Analysis Services 2012 in tabular mode available to host the tabular model while it is being developed.
Installing SSAS 2012 is essentially the same whether you want to use the Multidimensional Mode (the default) or Tabular Mode. During the install, the wizard will ask which mode you want to install (Figure 2). Choose the Tabular Mode to install the xVelocity engine on the server.
Later in the installation process you are asked which features you want to install. Make sure you select SQL Server Data Tools (Figure 3). Be aware that it still carries the old name, Business Intelligence Development Studio, in this version of SQL Server 2012.
Creating a Tabular Model Project
In order to create a tabular model project, you need to launch an instance of SQL Server Data Tools. In the New Project dialog box, under Installed Templates, select the Business Intelligence templates. Under the Analysis Services templates, you should see the Analysis Services Tabular Project template (Figure 4).
Importing Data
You can import data into a tabular model project from a variety of sources, including relational databases, multidimensional cubes, text files, and data feeds. Under the Model menu, click “Import From Data Source”. This launches the Table Import Wizard (Figure 5). The Table Import Wizard guides you through the steps necessary to import the data. First, choose a data source and create a connection. The connection information required is determined by the type of connection used. Figure 6 shows the dialog for connecting to a SQL Server database. Use the Table Import Wizard to connect to the AdventureWorksDW2008R2 database (available at http://msftdbprodsamples.codeplex.com).
After entering security credentials, you get the option of selecting data from a list of tables and views or entering a query to select the data. By selecting the tables and views option, you can select tables to import, automatically select related tables, and provide filters for the data import (Figure 7).
Selecting the query option allows you to create your own queries to retrieve data from tables, views, or stored procedures (Figure 8). You can also launch a pretty handy query designer to help construct your queries.
Using the Table Import Wizard, select the tables and fields shown in Figure 9.
After the data loads, under the Model menu select Model View –> Diagram View. You should see a tabular model similar to the one shown in Figure 10. Save the project for use in Part II of this series.
Conclusion
In this first part of the four-part series, you have seen how to create tabular model projects in SQL Server Data Tools. You also saw how to use the Table Import Wizard to import data into the tabular model. Save this project for use in Part II of this series, where you will implement calculations and create measures with DAX.
Labels:
2012,
BISM,
Business Intelligence,
data,
data import,
dax,
excel,
mdx,
Microsoft,
model,
molap,
powerpivot,
semantics,
server,
sql,
ssas,
tabular
Monday, 15 October 2012
An Introduction to Microsoft’s Business Intelligence Stack
By Angel Abundez
Introduction
Business Intelligence (or BI) is the visual display of key business information to the business users who need it to do their jobs. Microsoft’s BI offerings have advanced by leaps and bounds since their initial inclusion in SQL Server 2000. The reason I like Microsoft’s stack so much is that it has proven to give companies the flexibility, performance, and scale they need in a BI platform at a very competitive price. Of course, a picture is worth a thousand words: from Gartner’s 2011 Magic Quadrant for Business Intelligence Platforms, you can clearly see that Microsoft is leading the pack.
Some readers may already be familiar with BI vendors such as Tableau or QlikTech. Perhaps the reporting capabilities of your ERP/financial system are good enough. Without attacking any of the tools you use right now, this article is meant to educate you on the Microsoft BI platform from a practical standpoint. It’s important to understand why customers rated it so highly on “ability to execute” in the Magic Quadrant above. Gartner also mentions that low licensing costs make Microsoft’s BI platform very competitive.
To show you just how valuable the Microsoft BI platform can be, I thought I would tell you a story.
…In a Galaxy Far, Far Away
Once upon a time, Company X provided professional services and manufacturing to a variety of industries. It had experienced healthy growth in recent years, with a few remote offices opening around the United States. The time came when the company’s employee count increased, the number of projects increased, people were busy, and the shop was full of work. Life had to be good, right? Well, not exactly.
Deliveries were not being met, and Company X was failing to hit its revenue projections. Management scrambled to get the data needed to investigate why costs were so high. Was utilization of their employees too low? How did management not know this issue was brewing?
Realizing that users needed accurate information faster, the company piloted the use of SQL Server Reporting Services (SSRS). They developed reports from their two core systems: Finance and Project Management. Together, these two systems were to provide the outlook for the entire company, both actual and forecasted.
The investment proved to be useful, as awareness of company issues was becoming apparent. Business users’ reporting needs grew, the knowledge base of the source systems was growing, and the web-based portal they called “Report Manager” was handling most, if not all, of the data distribution. SSRS also includes features such as data-driven subscriptions and exports to Excel. One developer even used T-SQL behind an Access database to automate the generation of a budget spreadsheet complete with Actuals and Current Estimates, also known as EACs.
Life at the company was getting better. But, upper management was stuck because forecasts still weren’t accurate enough. Additionally, management needed a view of the aggregate costs and revenue across all projects, not just the multi-million dollar projects.
Company X’s vice president had developed an awesome model using Excel by creating a mashup of actual and forecast data to analyze cash flow for a few projects. However, the process was cumbersome, as it required pulling data from the source systems and into Excel. Then, employees had to further manipulate the data over six worksheets to come up with a summary tab for one project. With more than 60 projects to forecast, this method would not do.
Luckily, Microsoft answers this need with SQL Server Integration Services (SSIS) to Extract, Transform, and Load the data into a separate table in the database for the end analysis. No more multiple-tab worksheets! More importantly, the 60 projects were ready to be analyzed.
SSIS’ core purpose is to move data and do something with it. This is probably my favorite tool in the SQL Server BI Stack because it is very visual and you can see where your data is going. Once the ETL is developed, you can also schedule it to run as often as you need updated information. It has many other uses, such as import/export to Excel, emailing attachments, looping through files, performing maintenance on the server, changing the tires on your car. Anything! (Ok, I’ve gone too far).
A couple of SSIS packages and SSRS reports later, upper management had its company-wide forecast table being updated daily! This allowed the business analysts to further refine the project forecast reports, check them for accuracy, and ensure they got the data they wanted. Soon after, more summary reports were being developed.
Life was really good, right?
Yes, but the reports started running slowly. Users were using the reports more frequently, and new report requests were now coming in almost daily. Common requests were to slice the data by different entities such as Resources, Departments, Project Types, and Offices. There was some confusion about which SQL Server feature to use to tackle this problem.
The company had to determine exactly how many users needed to analyze the data. If the answer was one or two, then PowerPivot for Excel 2010 would have been a nice fit. It answers the business analysts’ need to pull large volumes of data into a spreadsheet to slice and dice, complete with slicers (graphical filters) that allow dynamic changes to your reports.
However, Company X had multiple users who needed to access large volumes of data; therefore, the best option was SQL Server Analysis Services (SSAS), which creates OLAP databases, also known as cubes. These OLAP databases are high-performance structures that aggregate, slice and dice, and organize your dimensions into hierarchies, allowing you to drill down until you get to the data you want. On top of all that, they are really fast.
Conclusion
In short, Microsoft’s BI stack extends the capabilities of SQL Server to new heights. If you own a license, don’t be afraid to use these exciting tools.
Monday, 8 October 2012
MDX Studio Online
Mosha Pasumansky, father of MDX
Mosha Pasumansky is one of the inventors of the MultiDimensional eXpressions (MDX) language, a query language for online analytical processing (OLAP) databases. Pasumansky is also one of the architects of the Microsoft Analysis Services, and an OLAP expert. Mosha Pasumansky is well known in the OLAP community for his Microsoft OLAP information website which contains a collection of technical articles and other resources related to Microsoft OLAP and Analysis Services. He also has a blog dedicated to MDX and Analysis Services. He spoke at Microsoft conferences such as TechEd and PASS, and he published the book Fast Track to MDX. As of 29 December 2009, Mr. Pasumansky had shifted his focus[1] to Bing, the Microsoft Search Engine, and is no longer maintaining his active stewardship of the BI Community. We are going to miss him and his articles regarding OLAP, MDX and Business Intelligence in general.
Source: Wikipedia
This is an online version of the MDX Studio product built by Mosha. The full version can be downloaded from http://www.mosha.com/msolap/mdxstudio.htm
For discussion, bug reports, feature suggestions, etc., please visit our blog here. Here is the link to MDX Studio Online: http://mdx.mosha.com/default.aspx
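As a quick illustration of the kind of statement you might paste into MDX Studio, here is a minimal MDX query; it assumes the standard Adventure Works sample cube that ships with the SQL Server samples.

    // Internet sales by calendar year, against the Adventure Works sample cube
    SELECT
      { [Measures].[Internet Sales Amount] } ON COLUMNS,
      NON EMPTY [Date].[Calendar Year].[Calendar Year].Members ON ROWS
    FROM [Adventure Works]

MDX Studio can parse and format a query like this and analyze it for common anti-patterns, which is what makes it such a useful learning tool.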
Labels:
analysis,
business,
dmx,
engine,
integration,
intelligence,
mdx,
Microsoft,
mosha,
olap,
pass,
pasumansky,
Reporting,
search,
services,
ssas,
teched
Thursday, 4 October 2012
Predictive Analytics
Source: WIKIPEDIA
Predictive analytics encompasses a variety of techniques from statistics, data mining and game theory that analyze current and historical facts to make predictions about future events.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions. Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields.
One of the most well-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer’s credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time. A well-known example would be the FICO score.
Predictive modeling can also be used to detect financial statement fraud in companies, allowing auditors to gauge a company's relative risk, and to increase substantive audit procedures as needed.
The Internal Revenue Service (IRS) of the United States also uses predictive analytics to try to locate tax fraud.
Linear regression is one of the most widely used techniques in predictive modeling: it models the dependent variable as a linear function of one or more predictor variables. The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied.
Once the model has been estimated, we would be interested to know whether the predictor variables belong in the model – i.e., is the estimate of each variable’s contribution reliable? To do this we can check the statistical significance of the model’s coefficients, which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the values of the independent variables can be assessed using the R² statistic. It measures the predictive power of the model, i.e., the proportion of the total variation in the dependent variable that is “explained” (accounted for) by variation in the independent variables.
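In symbols (generic notation, not tied to any particular dataset), the linear model, the OLS criterion, and the R² statistic described above are:

    y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + \varepsilon_i
    \hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_{i1} - \dots - \beta_k x_{ik} \right)^2
    R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}

An R² close to 1 means that most of the variation in the dependent variable is accounted for by the predictors.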
In a classification setting, assigning outcome probabilities to observations can be achieved with a logistic model, which transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison’s Logistic Regression for more on the theory of logistic regression).
The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the Hosmer–Lemeshow test.
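For reference, the two test statistics mentioned above take their standard textbook forms (generic notation):

    W = \left( \frac{\hat{b}}{\widehat{\mathrm{SE}}(\hat{b})} \right)^2
    LR = -2 \left( \ln L_{\text{restricted}} - \ln L_{\text{unrestricted}} \right)

Both are compared against a chi-squared distribution, with degrees of freedom equal to the number of restrictions being tested.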
Probit regression
Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics.
A good way to understand the key difference between probit and logit models is to assume that there is a latent variable z.
We do not observe z but instead observe y, which takes the value 0 or 1: y is 1 when z crosses a threshold and 0 otherwise. In the logit model we assume that the error term of the latent variable follows a logistic distribution; in the probit model we assume that it follows a standard normal distribution. Note that in the social sciences (for example, economics), probit is often used to model situations where the observed variable y is continuous but takes values between 0 and 1.
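Written out, the latent-variable formulation looks like this (generic notation introduced here for illustration):

    z_i = x_i^{\top} \beta + \varepsilon_i, \qquad y_i = 1 \text{ if } z_i > 0, \text{ else } y_i = 0
    \text{logit: } P(y_i = 1 \mid x_i) = \Lambda(x_i^{\top} \beta) = \frac{1}{1 + e^{-x_i^{\top} \beta}} \qquad \text{probit: } P(y_i = 1 \mid x_i) = \Phi(x_i^{\top} \beta)

Here \Lambda is the logistic CDF and \Phi is the standard normal CDF; the two link functions give similar fitted probabilities except in the tails.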
For practical purposes the only reasons for choosing the probit model over the logistic model would be:
- There is a strong belief that the underlying distribution is normal
- The actual event is not a binary outcome (e.g. Bankrupt/not bankrupt) but a proportion (e.g. Proportion of population at different debt levels).
Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) models and moving average (MA) models. The Box-Jenkins methodology (1976), developed by George Box and G. M. Jenkins, combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance.
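In the usual notation, an ARMA(p, q) model for a stationary series y_t is:

    y_t = c + \sum_{i=1}^{p} \varphi_i\, y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j}

where \varepsilon_t is white noise. An ARIMA(p, d, q) model applies the same structure to the series after it has been differenced d times to remove the trend.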
Box and Jenkins proposed a three stage methodology which includes: model identification, estimation and validation. The identification stage involves identifying if the series is stationary or not and the presence of seasonality by examining plots of the series, autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally the validation stage involves diagnostic checking such as plotting the residuals to detect outliers and evidence of model fit.
Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative and therefore normality cannot be assumed when dealing with duration/survival data. Hence the normality assumption of regression models is violated.
Classification and regression trees (CART) is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.
Once a rule is selected and splits a node into two, the same process is applied to each “child” node (i.e., it is a recursive procedure).
Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data is split as much as possible and then the tree is later pruned.)
Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.
An important concept associated with regression splines is that of a knot. A knot is where one local regression model gives way to another and thus is the point of intersection between two splines.
In multivariate adaptive regression splines (MARS), basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. MARS models almost always create the basis functions in pairs.
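The paired basis functions are typically mirrored “hinge” functions centered on a knot t, for example:

    B_1(x) = \max(0,\; x - t), \qquad B_2(x) = \max(0,\; t - x)

so that one member of the pair is active on each side of the knot.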
Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn the relationship between inputs and output through training. There are two types of training in neural networks used by different networks, supervised and unsupervised training, with supervised being the most common one.
Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. Some well-known network architectures are multilayer perceptrons, Kohonen self-organizing networks, Hopfield networks, etc.
Geospatial predictive modeling: Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution – there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where the locations of events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with environmental factors that represent those constraints and influences. Geospatial predictive modeling is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.
Wednesday, 3 October 2012
SSAS 2005: OLAP Cube Performance Tuning Lessons
Intro
A recent project has forced me (which is a good thing) to learn both the internals of SSAS 2005 and various performance tuning techniques to get maximum performance out of the OLAP server. It goes without saying that the grain of your underlying data warehouse's dimensions and facts will drive how large your cubes are (total cube space). It should also be a given that processing time and query (MDX) execution time usually compete with one another. For a model of a given grain, the more grain data, indexing, and aggregations you process up front, the more performant the end-user experience will generally be (but not always). And while ETL and cube processing time is important, in the real world it is the end-user experience (capability and performance) that drives the DW/BI solution's adoption!
Throw Out Unused Attributes / Optimize Attributes / Leverage Member Properties: The more dimension attributes you create, the larger the cube space, which also means more potential aggregations and indexes. Take the time to review the proposed set of attributes with your clients and be sure they all provide value as first-class dimension attributes. If you find that attributes are used primarily for informative purposes, consider leveraging member properties instead of creating an entire dimension attribute. Also, the surrogate key for your dimensions almost never adds business value; delete those attributes and assign the KeyColumns of your dimension's grain member (i.e., its lowest-level attribute) to the surrogate key. If an attribute participates in a natural hierarchy but is not useful as a stand-alone attribute hierarchy, disable its hierarchy via the AttributeHierarchyEnabled setting. Finally, be aware that if you have a 'deep' dimension (e.g., 19 million members at its lowest grain), any additional attributes you add will incur considerable overhead because of their much higher cardinality.
Set Partition Slices: The question of whether you must explicitly set a partition's Slice property is documented incorrectly in SQL Server 2005 Books Online (BOL). For all but the simplest partition designs you should consider setting the Slice property to match the partition's source definition (i.e., the dataset definitions should match across both properties). For those who do not know, a partition's slice is useful at query execution time: it tells the storage engine which partition(s) hold the data being requested. Please see the resources section below for more information on this.
Optimizing Attribute Relationships: Attribute relationships are the developer's mechanism for informing the OLAP server of the relationships between attributes. Just as fact tables (measure groups) relate to dimensions in various ways (regular, referenced, fact, many-to-many), dimension attributes can relate to one another in various forms (one-to-one or one-to-many). You can also set the RelationshipType to Flexible or Rigid. If you know your members change over time (i.e., reclass), make sure to leave this setting as Flexible; otherwise set it to Rigid. Take the time to thoroughly review your attribute relationships and ensure that they both represent the natural hierarchies and are optimized!
Scalable Measure Group Partitioning & Aggregation Strategies: This is one of the better-known techniques, but it is still of utmost importance. Design your measure group's partitions to optimize both processing and query execution. If your system has a 'rolling window' requirement, account for this in your ETL design/framework as well. You should almost always partition your measure groups by the DATE_KEY and match the underlying relational data warehouse (RDW) partitioning scheme. The basic idea is that your 'hot' (current period) partition should be optimized for query execution time by giving it a different aggregation design than the 'colder' (older) partitions. Also, if your main storage device (usually a SAN) cannot hold all of your required data, consider leveraging remote partitions to offload the extremely 'cold' partitions to slower storage.
Continuously Tune Aggregations Based on Usage: Most documentation in the community clearly states the order of creating effective aggregations: first leverage the Aggregation Design Wizard, enable the query log, and then re-tune the aggregations using the Usage-Based Optimization Wizard. What is not mentioned nearly enough is to continuously re-tune your aggregations against a refreshed query log using the Usage-Based Optimization Wizard. By doing so you ensure your aggregations are periodically revised based upon recent, real-world usage of your cubes.
Warming the Cache: Another well-known technique. By issuing real-world MDX queries on startup of the MSOLAP service, your cube's cache will be pre-warmed.
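As a minimal sketch of what such a warm-up query might look like (assuming the Adventure Works sample cube; in practice you would substitute queries captured from your own query log):

    // Representative warm-up query executed after the MSOLAP service starts
    SELECT
      { [Measures].[Internet Sales Amount], [Measures].[Internet Order Quantity] } ON COLUMNS,
      NON EMPTY [Product].[Category].[Category].Members *
                [Date].[Calendar Year].[Calendar Year].Members ON ROWS
    FROM [Adventure Works]

The exact query text matters less than touching the measure groups and attributes your users hit most; run a handful of such queries from a scheduled script or an SSIS package right after the service starts.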
Be Mindful of Many-to-Many Dimension Performance Implications: While many-to-many (M:M) dimensions are a powerful feature of SSAS 2005, that power comes at the cost of query execution time. There are a few modeling scenarios where you almost have to leverage them, but be aware that if you are dealing with large amounts of data this can have a huge performance impact at query time.
Control of the Client Application (MDX): That is the Question
A less discussed but still very important matter is how much control you have over the MDX issued against your cubes. For example, Excel pivot tables and other analytical tools allow users to explore your cubes freely, subject only to security (no, perspectives are not a security measure). If you can write (or control) the MDX being issued by the end user, you obviously have more control to ensure that the actual MDX is optimized.
Got 64-Bit? Multi-Core? For enterprise-class Microsoft DW/BI engagements, forget about x86/single-core, period. Analysis Services can chew through (process) more data with higher degrees of parallelization on x64 multi-core CPUs. The storage and formula engine caches rely on memory; long story short, Analysis Services has been designed to scale to higher levels when running on multi-core x64 CPUs. Also, be sure to set Analysis Services' Min/Max thread settings properly for both querying and processing.
Conclusion: I am dedicated to life-long learning, and I cannot take full credit for the content above; much of this knowledge is the work of others, along with my own testing. The resources section below gives credit where it is due. Take the time to learn how to build highly performant SSAS 2005 cubes so that your project stakeholders' first query is a performant one!
Resources Microsoft SQL Server 2005 Analysis Services (best SSAS 2005 OLAP internals book out!) by SAMS Publishing: http://safari.samspublishing.com/0672327821
SQL Server Analysis Services 2005 Performance Tuning Whitepaper (a great tuning document): download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc
HP Solutions with Microsoft SQL Server: http://h71028.www7.hp.com/enterprise/cache/3887-0-0-0-121.html
Labels:
2008,
2012,
attribute,
bids,
cache,
cube,
many to many,
ms sql,
olap,
optimize,
partition,
performance,
relationship,
slices,
ssas,
tuning
KPI for Hospitality Business - Important Calculations (BI) for Hospitality
Key Performance Indicators (KPI) for Hospitality industry help remove the guesswork from managing the business by checking the numbers that tell what’s really happening.
There’s a business saying: ‘If you can’t measure it, you can’t manage it!’ Real, responsive management needs reliable and truthful figures on which decisions can be based. If there are problems, you can take corrective action quickly. If you are having success, you’ll know to do more of what you’re doing! Good figures also give you a wider understanding of your success – sometimes if it’s a quiet month (when your suppliers are telling you that ‘everyone’s quiet!’) you’ll see that some of your KPIs are actually improving (e.g., sales per head).
KPIs in Hospitality industry can be categorized for functions like Reception, Housekeeping, Maintenance, Kitchen, Restaurant, Sales, Store, Purchasing, etc.
Staff KPI:
- Wage Cost %: wage costs as a percentage of sales (see the formulas after this list)
- Total Labour Cost %: not just wages but also workers' compensation (WorkCover) insurance, retirement and superannuation charges, and other taxes that apply to your payroll
- Total Labour Hours: how many hours worked in each section. This is useful to compare against sales to measure productivity
- Event Labour charge-out: Hotels usually charge-out service staff at a markup on the cost of the wages paid. Are you achieving a consistent mark-up?
- Labour turnover: number of new staff in any one week or month
- Average length of employment: another way to look at your success in keeping staff. Add up the total number of weeks all your people have worked for you and divide this by the total number of staff
- Average hourly pay: divide the total payroll by the number of hours worked by all staff
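As a worked illustration of the first and last items above (the figures are hypothetical):

    \text{Wage Cost \%} = \frac{\text{total wages}}{\text{total sales}} \times 100 = \frac{9{,}000}{30{,}000} \times 100 = 30\%
    \text{Average hourly pay} = \frac{\text{total payroll}}{\text{total hours worked}} = \frac{9{,}000}{600} = 15 \text{ per hour}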
Kitchen Management KPI:
- Food Cost %: measured by adding up food purchases for the week and measuring them against your food sales
- Total Food Costs: how much was the total food bill? Sometimes a useful figure to show staff who think you are made of money
- Food Costs per head: see every week how much it costs to feed an average customer
- Kitchen Labour %: measure kitchen productivity by comparing kitchen labour against food sales
- Kitchen Labour hours: how many hours worked in this section? Compare against sales to measure productivity
- Stock value: food stock holding. It should be less than a week’s use, but can slip out if you are storing frozen food
- Main selling items: weekly sales from POS or dockets & know the best sellers and map these on the Menu Profitability
- Kitchen linen costs: cost of uniforms, aprons & tea-towels can be a shock! How many tea-towels are being used each day?
Front House Management KPI:
- Total Sales Per Head: total sales divided by number of customers. This may vary between different times of the day
- Number of customers: simple! A good measure of popularity
- Food, Dessert, Beverage Sales per head: how much your menu appeals to your customers (do you have all the choices they want), & how well your staff are selling.
- Seating Efficiency: how well are tables being turned over while still offering high quality customer service
- Basket Analysis: how many items do lunch customers buy? What else do morning coffee drinkers order? Grab a pile of dockets and look for ordering patterns
- Linen costs: uniforms, aprons etc.
- Front of House Labour %: how many hours worked in this section? Compare against sales to measure productivity
- FOH Labour hours: how many hours worked in this section? Compare against sales to measure productivity
- Customer satisfaction: Feedback forms, complaints and other methods that are hard to quantify sometimes but worth making an attempt.
- Strike rate: if 500 people came to the hotel last night and only 100 ate at the bistro, your ‘strike rate’ would be 1 in 5, or 20%
- RevPASH (Revenue per Available Seat Hour): take the total number of ‘seat hours’ (seats available multiplied by hours open) and divide total revenue for the period by this number (see the worked example below)
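A worked RevPASH example (hypothetical figures): a bistro with 80 seats open for 6 hours has 480 available seat hours; if it took $12,000 in revenue over that period, then:

    \text{RevPASH} = \frac{\text{revenue}}{\text{seats} \times \text{hours}} = \frac{12{,}000}{80 \times 6} = 25 \text{ per available seat hour}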
Bar & Restaurant Management KPI:
- Sales per head: how much your beverage and wine appeals to your customers and how well your staff are selling
- Gross Profit on sales: difference between what you sold and what it cost you. The sales mix can influence this heavily
- Average Profit % on sales: useful to see if your sales are holding steady, although ultimately the actual Gross Profit (real money) will matter the most
- Stock value: It’s worth checking with your suppliers and seeing how much you can order ‘just in time’
- Stock turnover: how fast is your cellar stock selling?
- Carrying cost of stock: what is the cost of financing the stock?
- Sales / stock-take discrepancies: alcohol is a security problem, and keeping an eye on ‘shrinkage’, staff drinks, and theft is a constant task
Banquet Sales Management KPI:
- Number of customers: simple! A good measure of popularity.
- Visits by your top 100 or 200 customers: they provide a huge proportion of your sales! Track their frequency and spending – these people are gold!
- Sales per head: across all areas
- Marketing and advertising costs: total value of spend, always trying to measure it against response
- Response rates: how many people responded to different campaigns and what effect did this have on profit?
- Press mentions: keeping your eyes open for favourable mentions
- Bookings: in the current week and month and coming up. Also in peak times, eg Christmas.
- Event inquiries: No. of inquiries about large bookings & functions, especially if a campaign to promote them is on
- Sales inquiry conversion rate: the number of inquiries that turn into actual sales. If the rate is low, ask why so few people were ‘converted’: was it the quality of the promotional material, the skill of the sales staff, the pricing, or the make-up of your function menus and facilities?
Finance & Admin Management KPI:
- Cash position at bank: how much do you have available after reconciling your cheque book?
- Stock-take discrepancies: measure of efficiency of each department, but also of administrative systems in place
- Total accounts due: how much do you owe?
- Total accounts payable: needs careful management if you have accounts, eg large restaurants
- Return on Investment: profit business makes can be measured as a percentage return on the amount invested in it
- Taxes owed: to know how much is owed at any one time so it is not ’spent’
- Sales & costs: actual figures compared to what budgeted for a period
- Administration labour costs: strong and skilful administrative support will be essential to manage the KPIs listed above!
- IT efficiency: how much down-time for IT systems? How accurate is the POS system?
Other KPIs:
- Revenue per available room (RevPAR; see the formulas after this list)
- Average daily rate (ADR) of rooms
- % of occupancy of rooms
- Average cleaning costs per room
- % of reservation requests cancelled with / without penalty
- % of rooms with maintenance issues
- % of cancelled reservation requests
- Average number of guests per room
- Average length of stay of guests
- % of non-room revenue
- % of cancelled rooms occupied
- Kilowatt-hours (kWh) per room
- Number of hotel guests per employee
- Gross operating profits per available room
- % of guests who would rank stay as exceeding expectations
- Waste per night per occupied bed space
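The room-revenue KPIs at the top of this list are related by standard formulas, shown here for reference:

    \text{Occupancy \%} = \frac{\text{rooms sold}}{\text{rooms available}} \times 100
    \text{ADR} = \frac{\text{room revenue}}{\text{rooms sold}}
    \text{RevPAR} = \frac{\text{room revenue}}{\text{rooms available}} = \text{ADR} \times \text{occupancy rate}

For example, 70 rooms sold out of 100 available at an ADR of $120 gives RevPAR = 120 × 0.70 = $84.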
Provided by: Maia Intelligence and posted by: Besim Ismaili
Labels:
arr,
bar,
business,
finance,
hospitality,
hotel,
hotels,
house,
intelligence,
kpi,
occupancy,
restaurants,
revenue,
revpar,
service
Data mining - SSAS and DMX
Computers can be programmed to sort through enormous amounts of data looking for patterns. It’s an exciting new frontier that goes by many different names — in business, the most common ones are data mining, predictive analytics, and machine learning — but this book sticks to “data mining”. The Microsoft data-mining algorithms are part of SQL Server Analysis Services, but you don’t have to be a super computer ninja to access and use them. Microsoft offers a free Excel Data Mining Add-In that transforms Excel into a simple, intuitive client program for the SSAS data-mining algorithms.
Analyzing data - SSAS
As you can imagine, the amount of data contained in a modern business is enormous. If the data were very small, you could simply use Microsoft Excel and perform all of the ad-hoc analysis you need with a Pivot Table. However, when the rows of data reach into the billions, Excel is not capable of handling the analysis on its own. For these massive databases, a concept called Online Analytical Processing (OLAP) is required. Microsoft’s implementation of OLAP is called SQL Server Analysis Services (SSAS), which I cover in detail in Chapter 8.
If you’ve used Excel Pivot Tables before, think of OLAP as essentially a massive Pivot Table with hundreds of possible pivot points and billions of rows of data. A Pivot Table allows you to re-order and sum your data based on different criteria. For example, you may want to see your sales broken down by region, product, and sales rep one minute and then quickly re-order the groupings to include product category, state, and store.
In Excel 2010 there is a new feature called PowerPivot that brings OLAP to your desktop. PowerPivot allows you to pull in millions of rows of data and work with it just like you would a smaller set of data. After you get your Excel sheet how you want it, you can upload it to a SharePoint 2010 site and share it with the rest of your organization.
With PowerPivot you are building your own Cubes right on your desktop using Excel. If you use PowerPivot, you can brag to your friends and family that you are an OLAP developer. Just don’t tell them you are simply using Excel and Microsoft did some magic under the covers.
When you need a predefined and structured Cube that is already built for you, then you turn to your IT department.
Integrating data from many sources - SSIS
The many different systems and processes that make up an organization create data in all shapes and forms. This data usually ends up stored in the individual systems that generated it — but without any standard format. Fortunately, SQL Server has a component — SQL Server Integration Services (SSIS) — that can connect to these many different data sources and pull the data back into the central data warehouse. As the data moves from the source systems to the Data Warehouse, SSIS can also transform it into a standard, useful format. The whole process is known as Extract, Transform, and Load (ETL).
Reporting on data - SSRS
When you have a Data Warehouse, you likely don’t want to look at rows
and rows of data; instead, you want to visualize the data and give it meaning.
Building reports that answer a particular question (or set of questions)
means taking raw data and turning it into information that can be used to
make intelligent business decisions. SQL Server Reporting Services (SSRS), a component of SQL Server, handles that bit of magic.
SSRS has features that can make your reports as fancy as you like — gauges, charts, graphs, aggregates, and many other snazzy ways to visualize the data.
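The report layout itself is built in Report Designer, but behind every SSRS report sits an ordinary dataset query. The following is a minimal T-SQL sketch of such a query, reusing the hypothetical FactSales and DimRegion tables from the earlier sketches plus a hypothetical DimDate table, with @Year standing in for an SSRS report parameter.

-- Hypothetical dataset query for a "sales by region" report.
SELECT r.RegionName,
       SUM(f.SalesAmount) AS TotalSales
FROM FactSales AS f
JOIN DimRegion AS r ON r.RegionKey = f.RegionKey
JOIN DimDate   AS d ON d.DateKey   = f.OrderDateKey
WHERE d.CalendarYear = @Year          -- supplied by the report parameter
GROUP BY r.RegionName
ORDER BY TotalSales DESC;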
Data warehousing and data marts
Although computer systems help solve many problems in business, they use
so many different kinds of programs that they can’t always communicate
easily with each other. A tremendous number of systems make up a modern
organization — payroll, accounting, expenses, time, inventory, sales, customer
relations, software licensing, and so on. Many of these systems have
their own databases and ways of storing data. Combining data from the
tangle of systems — let alone doing something useful with the combined
data — becomes extremely difficult.
Business intelligence creates a “big picture” by storing and organizing data
from many disparate systems in one usable format. The idea is to make the
data readily accessible for reporting, analysis, and planning. A data warehouse
is a central database created for just that purpose: making the data
from all those sources useful and accessible for the organization. The idea is
to give decision-makers the information they need for making critical business
decisions.
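To make "one usable format" more tangible, here is a minimal T-SQL sketch of a star schema, the layout most data warehouses use: dimension tables that describe the business surround a fact table that records measurable events. The names match the hypothetical FactSales, DimProduct, and DimRegion tables used in the earlier query sketches and are illustrative only.

-- Hypothetical dimension tables: descriptive attributes gathered from many source systems.
CREATE TABLE DimProduct (
    ProductKey      INT IDENTITY PRIMARY KEY,
    ProductCode     NVARCHAR(20)  NOT NULL,
    ProductName     NVARCHAR(100) NOT NULL,
    ProductCategory NVARCHAR(50)  NOT NULL
);

CREATE TABLE DimRegion (
    RegionKey  INT IDENTITY PRIMARY KEY,
    RegionName NVARCHAR(50) NOT NULL
);

-- Hypothetical fact table: one row per sale, keyed to the dimensions it can be sliced by.
CREATE TABLE FactSales (
    OrderDateKey INT            NOT NULL,
    ProductKey   INT            NOT NULL REFERENCES DimProduct (ProductKey),
    RegionKey    INT            NOT NULL REFERENCES DimRegion (RegionKey),
    SalesAmount  DECIMAL(18, 2) NOT NULL
);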
A data mart is a more specialized tool with a similar purpose; it’s a functional
database that pulls particular information out of the overall Data Warehouse
(or even directly from source systems depending on who you ask) to answer
specific queries. For example, a manufacturing location may need to compile
some specialized data unique to the process used to make a particular product.
The overall data warehouse is too big and complex to do that job (or to modify
effectively to handle it), so a smaller version — in BI lingo, a data mart — can be
created for this one manufacturing location.
The Microsoft SQL Server Database Engine manages not only data warehouses,
but also data marts — and both types of data storage can become
massive. Fortunately, SQL Server addresses this problem by storing one
database across a cluster of many different servers. This approach accommodates
the enterprise as it grows in scale.
Friday, 28 September 2012
Information management concepts
Following the behavioral science theory of management, developed mainly at Carnegie Mellon University and prominently represented by Barnard, Richard M. Cyert, March and Simon, most of what goes on in service organizations is actually decision making and information processing.
According to March and Simon [1], organizations have to be considered as cooperative systems with a high level of information processing and a vast need for decision making at various levels. They also claimed that there are factors that would prevent individuals from acting strictly rationally, in contrast to what has been proposed and advocated by classical theorists.
Instead of using the model of the economic man, as advocated in classical theory, they proposed the administrative man as an alternative, based on their argumentation about the cognitive limits of rationality.
While the theories developed at Carnegie Mellon clearly filled some theoretical gaps in the discipline, March and Simon [1] did not propose a particular organizational form that they considered especially suited to coping with the cognitive limitations and bounded rationality of decision-makers. Through their argumentation against normative decision-making models, i.e., models that prescribe how people ought to choose, they also abandoned the idea of an ideal organizational form.
In addition to the factors mentioned by March and Simon, there are two further considerations, stemming from environmental and organizational dynamics. Firstly, it is not possible to access, collect, and evaluate all environmental information relevant to a given decision at a reasonable cost in time and effort [2]. In other words, in economic terms, the transaction cost associated with the information process is too high. Secondly, established organizational rules and procedures can prevent the most appropriate decision from being taken, i.e., a sub-optimal solution is chosen in accordance with the organizational rank structure or institutional rules, guidelines, and procedures [3][4], an issue that has also been brought forward as a major critique of the principles of bureaucratic organizations [5].
According to the Carnegie Mellon School and its followers, information management, i.e., the organization's ability to process information, is at the core of organizational and managerial competencies. Consequently, strategies for organization design must aim at improving information-processing capability. Jay Galbraith [6] has identified five main organization design strategies within two categories: reducing the need for information processing and increasing the organizational capacity to process information.
1. Reduction of information processing needs
   1. Environmental management
   2. Creation of slack resources
   3. Creation of self-contained tasks
2. Increasing the organizational information processing capacity
   1. Creation of lateral relations
   2. Vertical information systems
Environmental management. Instead of adapting to changing environmental circumstances, the organization can seek to modify its environment. Vertical and horizontal collaboration, i.e. cooperation or integration with other organizations in the industry value system, are typical means of reducing uncertainty. An example of reducing uncertainty in relation to the supplying or demanding stages of the industry system is the concept of Supplier-Retailer collaboration or Efficient Consumer Response.
Creation of slack resources. In order to reduce exceptions, performance levels can be reduced, thus decreasing the information load on the hierarchy. These additional slack resources, required to reduce information processing in the hierarchy, represent an additional cost to the organization. The choice of this method clearly depends on the alternative costs of other strategies.
Creation of self-contained tasks. Achieving a conceptual closure of tasks is another way of reducing information processing. In this case, the task-performing unit has all the resources required to perform the task. This approach is concerned with task (de-)composition and interaction between different organizational units, i.e. organizational and information interfaces.
Creation of lateral relations. In this case, lateral decision processes are established that cut across functional organizational units. The aim is to apply a system of decision subsidiarity, i.e. to move decision power to the process, instead of moving information from the process into the hierarchy for decision-making.
Investment in vertical information systems. Instead of processing information through the existing hierarchical channels, the organization can establish vertical information systems. In this case, the information flow for a specific task (or set of tasks) is routed in accordance with the applied business logic, rather than the hierarchical organization.
Following the lateral relations concept, it also becomes possible to employ an organizational form that is different from the simple hierarchical structure. The matrix organization aims to bring together the functional and product departmental bases and to achieve a balance in information processing and decision making between the vertical (hierarchical) and the horizontal (product or project) structure. The creation of a matrix organization can also be considered as management's response to a persistent or permanent demand for adaptation to environmental dynamics, instead of a response to episodic demands.
Source: Wikipedia
Wednesday, 26 September 2012
Important resources for Microsoft BI
Microsoft Business Intelligence Podcast link:
http://www.microsoft.com/bi/resourcecenter/podcasts.aspx
Microsoft Business Intelligence Virtual Labs link:
http://www.microsoft.com/bi/resourcecenter/virtual-labs.aspx
Microsoft Business Intelligence White Papers link:
http://www.microsoft.com/bi/resourcecenter/whitepapers.aspx
Labels: Business Intelligence, labs, learning, Microsoft, resources, virtual, white papers
Welcome to my BI Blog
This blog will contain information about Business Intelligence, Data Mining, Data Modeling, and MDX, including tutorials, white papers, important updates, etc.
Our intention is to open a discussion blog where experts can talk generally about Business Intelligence or exchange views on particular problems they have experienced. We are going to talk about different platforms and their advantages and disadvantages: the Microsoft platform, MS SQL 2005 or 2008 BIDS, Informatica, the Oracle BI solution, etc.
MDX and DMX will be part of this blog too: advanced calculations that can be handled with MDX, and techniques for improving query times when reporting on large data warehouse calculations across dimensions.
Dimensional versus relational databases, OLAP cubes, and algorithms for performance improvement will enrich our blog.
You will be updated with podcasts, white papers, analyses, and links that are important to our audience.
Labels: algorithm, analysis, analytic, analytics, Business Intelligence, cognos, Data Mining, ibm, mdx, olap, sas, ssas