Data Mining Interview Questions

  • 1. What Is Data Mining?

    Data mining is the process of extracting hidden trends from a data warehouse. For example, an insurance data warehouse can be mined to identify the highest-risk people to insure in a certain geographical area.

  • 2. Differentiate Between Data Mining And Data Warehousing?

    Data warehousing is merely extracting data from different sources, cleaning the data and storing it in the warehouse, whereas data mining aims to examine or explore that data using queries. These queries can be fired against the data warehouse. Exploring the data in this way helps in reporting, planning strategies, finding meaningful patterns, etc.

    E.g., a company's data warehouse stores all the relevant information about projects and employees. Using data mining, one can use this data to generate different reports, such as profits generated.

  • 3. What Is Data Purging?

    The process of cleaning junk data is termed data purging. Purging data means getting rid of unnecessary NULL values in columns. It is usually done when the size of the database grows too large.

  • 4. What Are Cubes?

    A data cube stores data in a summarized form, which helps in faster analysis of the data. The data is stored in a way that makes reporting easy.

    E.g., using a data cube, a user can analyze the weekly and monthly performance of an employee. Here, month and week could be considered the dimensions of the cube.

  • 5. What Are Olap And Oltp?

    An IT system can be divided into analytical processing and transactional processing.

    OLTP – characterized by a large number of short online transactions. The emphasis is on fast query processing and maintaining data integrity in multi-access environments.

    OLAP – characterized by low volumes of transactions. Queries are very complex and involve aggregations. Response time is an effectiveness measure, and OLAP is widely used in data mining techniques.

  • 6. What Are The Different Problems That “data Mining” Can Solve?

    • Data mining helps analysts make faster business decisions, which increases revenue at lower cost.
    • Data mining helps to understand, explore and identify patterns in data.
    • Data mining automates the process of finding predictive information in large databases.
    • It helps to identify previously hidden patterns.

  • 7. What Are Different Stages Of “data Mining”?

    Exploration:
     This stage involves the preparation and collection of data, including data cleaning and transformation. Depending on the size of the data, different tools may be required to analyze it. This stage helps to determine the different variables of the data and their behavior.

    Model building and validation:
     This stage involves choosing the best model based on predictive performance. The model is applied to different data sets and compared for best performance. This stage is also called pattern identification. It is somewhat complex because it involves choosing the best pattern to allow easy predictions.

    Deployment:
     The model selected in the previous stage is applied to new data sets to generate predictions or estimates of the expected outcome.

  • 8. What Is Discrete And Continuous Data In Data Mining World?

    Discrete data can be considered defined or finite data, e.g. mobile numbers or gender. Continuous data changes continuously and in an ordered fashion, e.g. age.

  • 9. What Is Model In Data Mining World?

    Models in data mining help the different algorithms with decision making or pattern matching. The second stage of data mining involves considering various models and choosing the best one based on its predictive performance.

  • 10. How Does The Data Mining And Data Warehousing Work Together?

    Data warehousing can be used to analyze business needs by storing data in a meaningful form. Using data mining, one can forecast those business needs; the data warehouse acts as a source for this forecasting.

  • 11. What Is A Decision Tree Algorithm?

    A decision tree is a tree in which every node is either a leaf node or a decision node. The tree takes an object as input and outputs a decision. Paths from the root node to a leaf node are reached using AND, OR, or both. The tree is constructed using the regularities of the data. The decision tree is not affected by Automatic Data Preparation.
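
    For illustration, a tiny hand-built decision tree in Python. The attributes and thresholds are invented for this example; a real tree would be learned from the data:

```python
def insure_risk(age, region_flood_prone):
    """Hand-built decision tree: inner nodes test attributes, leaves output a class."""
    if region_flood_prone:            # decision node
        if age > 60:                  # decision node
            return "high risk"        # leaf node
        return "medium risk"          # leaf node
    return "low risk"                 # leaf node
```

    Each path from the root to a leaf is a conjunction (AND) of attribute tests, and the set of paths to the same class forms a disjunction (OR).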

  • 12. What Is Naive Bayes Algorithm?

    The Naive Bayes algorithm is used to generate mining models. These models help to identify relationships between the input columns and the predictable columns. The algorithm can be used in the initial stage of exploration. It calculates the probability of every state of each input column, given each possible state of the predictable column. After the model is built, the results can be used for exploration and for making predictions.
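
    A minimal sketch of the idea in Python, on invented toy data (note it does no smoothing of zero counts, which a real implementation would add):

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Count class priors and per-column value frequencies per class."""
    priors = Counter(labels)
    counts = defaultdict(Counter)        # (column, value) -> per-class counts
    for row, y in zip(rows, labels):
        for col, val in enumerate(row):
            counts[(col, val)][y] += 1
    return priors, counts

def predict_nb(priors, counts, row):
    """Score each class as P(class) * product over columns of P(value | class)."""
    total = sum(priors.values())
    scores = {}
    for y, n in priors.items():
        p = n / total                    # prior P(class)
        for col, val in enumerate(row):
            p *= counts[(col, val)][y] / n
        scores[y] = p
    return max(scores, key=scores.get)
```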

  • 13. Explain Clustering Algorithm?

    The clustering algorithm is used to group sets of data with similar characteristics, called clusters. These clusters help in making faster decisions and in exploring data. The algorithm first identifies relationships in a dataset and then generates a series of clusters based on those relationships. The process of creating clusters is iterative: the algorithm redefines the groupings to create clusters that better represent the data.

  • 14. What Is Time Series Algorithm In Data Mining?

    The time series algorithm can be used to predict continuous values of data. Once the algorithm is trained to predict a series of data, it can predict the outcome of other series. The algorithm generates a model that can predict trends based only on the original dataset. New data can also be added and automatically becomes part of the trend analysis.

    E.g., the performance of one employee can be used to forecast profit.

  • 15. Explain Association Algorithm In Data Mining?

    The association algorithm is used in recommendation engines based on market basket analysis. Such an engine suggests products to customers based on what they bought earlier. The model is built on a dataset containing identifiers, both for individual cases and for the items that the cases contain. A group of items in a case is called an item set. The algorithm traverses the data set to find items that appear together in a case; the MINIMUM_SUPPORT parameter determines which associated item sets are kept.
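
    A rough sketch of the support-counting step in Python (limited to 1- and 2-item sets for brevity; support is expressed as a fraction of transactions):

```python
from itertools import combinations

def frequent_itemsets(transactions, minimum_support):
    """Count item-set support and keep the sets meeting MINIMUM_SUPPORT."""
    counts = {}
    for items in transactions:
        for size in (1, 2):                      # 1- and 2-item sets only
            for combo in combinations(sorted(items), size):
                counts[combo] = counts.get(combo, 0) + 1
    n = len(transactions)
    # keep only item sets whose support fraction reaches the threshold
    return {s: c / n for s, c in counts.items() if c / n >= minimum_support}
```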

  • 16. What Is Sequence Clustering Algorithm?

    The sequence clustering algorithm collects similar or related paths: sequences of data containing events. The data represents a series of events or transitions between states in a dataset, such as a series of web clicks. The algorithm examines all probabilities of transitions and measures the differences, or distances, between all the possible sequences in the data set. This helps it determine which sequences are the best input for clustering.

    E.g., the sequence clustering algorithm may help find the best path for storing products of a “similar” nature in a retail warehouse.

  • 17. Explain The Concepts And Capabilities Of Data Mining?

    Data mining is used to examine or explore the data using queries. These queries can be fired against the data warehouse. Exploring the data in this way helps in reporting, planning strategies, finding meaningful patterns, etc. Data mining is commonly used to transform large amounts of data into a meaningful form. Data here can be facts, numbers or any real-time information like sales figures, cost, metadata, etc. Information would be the patterns and the relationships amongst the data that provide insight.

  • 18. Explain How To Work With The Data Mining Algorithms Included In Sql Server Data Mining?

    SQL Server data mining offers Data Mining Add-ins for Office 2007 that allow discovering patterns and relationships in the data and support enhanced analysis. The add-in called Data Mining Client for Excel is used to first prepare data and then build, evaluate, manage and predict results.

  • 19. Explain How To Use Dmx-the Data Mining Query Language.

    Data Mining Extensions (DMX) is based on the syntax of SQL. It is based on relational concepts and is mainly used to create and manage data mining models. DMX comprises two types of statements: data definition and data manipulation. Data definition is used to define or create new models and structures.

    Example:
    CREATE MINING STRUCTURE
    CREATE MINING MODEL
    Data manipulation is used to manage the existing models and structures.
    Example:
    INSERT INTO
    SELECT FROM .CONTENT (DMX)

  • 20. Explain How To Mine An Olap Cube?

    A data mining extension can be used to slice the data of the source cube in the order discovered by data mining. When a cube is mined, the case table is a dimension.

  • 21. What Are The Different Ways Of Moving Data/databases Between Servers And Databases In Sql Server?

    There are several ways of doing this. One can use any of the following options:
    – BACKUP/RESTORE,
    – detaching/attaching databases,
    – replication,
    – DTS,
    – BCP,
    – log shipping,
    – INSERT…SELECT,
    – SELECT…INTO,
    – creating INSERT scripts to generate data.

  • 22. What Are The Benefits Of User-defined Functions?

    a. Can be used in a number of places without restrictions, as compared to stored procedures.
    b. Code can be made less complex and easier to write.
    c. Parameters can be passed to the function.
    d. They can be used to create joins and can also be used in a SELECT, WHERE or CASE statement.
    e. Simpler to invoke.

  • 23. Define Pre Pruning?

    A tree is pruned by halting its construction early. Upon halting, the node becomes a leaf. The leaf may hold the most frequent class among the subset samples.

  • 24. What Are Interval Scaled Variables?

    Interval-scaled variables are continuous measurements on a linear scale, for example height and weight, weather temperature, or the coordinates of a cluster. Distances between such measurements can be calculated using the Euclidean or Minkowski distance.
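
    A quick sketch in Python, showing that the Minkowski distance generalizes both: p = 2 gives the Euclidean distance and p = 1 the Manhattan distance:

```python
def minkowski(a, b, p):
    """Minkowski distance of order p between two points of equal dimension."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)
```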

  • 25. What Is A Sting?

    Statistical Information Grid is called STING; it is a grid-based multi-resolution clustering method. In the STING method, all the objects are contained in rectangular cells; these cells are kept at various levels of resolution, and these levels are arranged in a hierarchical structure.

  • 26. What Is A Dbscan?

    Density-Based Spatial Clustering of Applications with Noise is called DBSCAN. DBSCAN is a density-based clustering method that grows high-density regions of objects into clusters of arbitrary shape and size. DBSCAN defines a cluster as a maximal set of density-connected points.
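
    A minimal sketch of the idea in Python on one-dimensional points, not production code: cluster labels are integers, and -1 marks noise:

```python
def dbscan(points, eps, min_pts):
    """Label 1-D points with cluster ids; -1 marks noise/outliers."""
    def neighbors(i):
        return [j for j in range(len(points)) if abs(points[i] - points[j]) <= eps]

    labels, cluster = {}, 0
    for i in range(len(points)):
        if i in labels:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                      # not dense enough: noise (for now)
            continue
        labels[i] = cluster                     # i is a core point: start a cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:                            # grow the density-connected region
            j = seeds.pop()
            if labels.get(j) == -1:
                labels[j] = cluster             # former noise becomes a border point
            if j in labels:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:              # j is also a core point: expand
                seeds.extend(jn)
        cluster += 1
    return [labels[i] for i in range(len(points))]
```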

  • 27. Define Density Based Method?

    The density-based method deals with arbitrarily shaped clusters. In density-based methods, clusters are formed in the regions where the density of objects is high.

  • Data analyst Interview Questions

  • 28. Define Chameleon Method?

    Chameleon is another hierarchical clustering method that uses dynamic modeling. Chameleon was introduced to overcome the drawbacks of the CURE method. In this method, two clusters are merged if the interconnectivity between the two clusters is greater than the interconnectivity between the objects within each cluster.

  • 29. What Do U Mean By Partitioning Method?

    In the partitioning method, a partitioning algorithm arranges all the objects into various partitions, where the total number of partitions is less than the total number of objects. Each partition represents a cluster. The two types of partitioning methods are k-means and k-medoids.
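
    A bare-bones k-means sketch in Python on one-dimensional data (naive seeding from the first k points; real implementations seed more carefully and check for convergence):

```python
def k_means(points, k, rounds=10):
    """Plain k-means: assign each point to its nearest centroid, then recompute."""
    centroids = points[:k]                       # naive seeding: first k points
    for _ in range(rounds):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its partition
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```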

  • 30. Define Genetic Algorithm?

    A genetic algorithm enables us to locate an optimal binary string by processing an initial random population of binary strings with operations such as artificial mutation, crossover and selection.
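
    A toy sketch in Python that evolves a binary string towards all 1s; the fitness function, rates and sizes are arbitrary choices for the example:

```python
import random

def genetic_search(length=12, pop_size=20, generations=60, seed=0):
    """Toy genetic algorithm: selection + crossover + mutation on bit strings."""
    rng = random.Random(seed)
    fitness = lambda s: sum(s)                       # count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # selection: keep fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)           # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)                # artificial mutation
            child[i] ^= rng.random() < 0.3
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```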

  • 31. What Is Ods?

    1. ODS means Operational Data Store.
    2. It is a collection of operational or base data that is extracted from operational databases and standardized, cleansed, consolidated, transformed, and loaded into an enterprise data architecture. An ODS is used to support data mining of operational data, or as the store for base data that is summarized for a data warehouse. The ODS may also be used to audit the data warehouse to ensure that summarized and derived data are calculated properly. The ODS may further become the enterprise's shared operational database, allowing operational systems that are being reengineered to use the ODS as their operational database.

  • 32. What Is Spatial Data Mining?

    Spatial data mining is the application of data mining methods to spatial data. Spatial data mining performs the same functions as data mining, with the end objective of finding patterns in geography. So far, data mining and Geographic Information Systems (GIS) have existed as two separate technologies, each with its own methods, traditions and approaches to visualization and data analysis. In particular, most contemporary GIS have only very basic spatial analysis functionality. The immense explosion in geographically referenced data occasioned by developments in IT, digital mapping, remote sensing, and the global diffusion of GIS emphasises the importance of developing data-driven inductive approaches to geographical analysis and modeling.

    Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision-making. Recently, the task of integrating these two technologies has become critical, especially as various public and private sector organizations possessing huge databases with thematic and geographically referenced data begin to realise the huge potential of the information hidden there. Among those organizations are:

    * offices requiring analysis or dissemination of geo-referenced statistical data
    * public health services searching for explanations of disease clusters
    * environmental agencies assessing the impact of changing land-use patterns on climate change
    * geo-marketing companies doing customer segmentation based on spatial location.

  • 33. What Is Smoothing?

    Smoothing is an approach used to remove the nonsystematic behaviors found in a time series. It usually takes the form of finding moving averages of attribute values. It is used to filter out noise and outliers.
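
    A small sketch of a simple moving average in Python:

```python
def moving_average(series, window):
    """Smooth a series by averaging each run of `window` consecutive values."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```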

  • 34. What Are The Advantages Data Mining Over Traditional Approaches?

    Data mining is used for estimating the future. For example, for a company or business organization, using data mining we can predict the future of the business in terms of revenue, employees, customers, orders, etc.

    Traditional approaches use simple algorithms for estimating the future, but they do not give accurate results when compared to data mining.

  • 35. What Is Model Based Method?

    Model-based methods are used to optimize the fit between a given data set and a mathematical model. This method assumes that the data are distributed by probability distributions. There are two basic approaches in this method:
    1. Statistical Approach
    2. Neural Network Approach.

  • 36. What Is An Index?

    Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve data quicker. Indexes are of two types: clustered indexes and non-clustered indexes. Rows in the table are stored in the order of the clustered index key.
    There can be only one clustered index per table.
    Non-clustered indexes have their own storage, separate from the table data storage.
    Non-clustered indexes are stored as B-tree structures.
    Leaf-level nodes hold the index key and its row locator.

  • 37. Mention Some Of The Data Mining Techniques?

    1. Statistics
    2. Machine learning
    3. Decision Tree
    4. Hidden Markov models
    5. Artificial Intelligence
    6. Genetic Algorithm
    7. Meta learning
  • 38. Define Binary Variables? And What Are The Two Types Of Binary Variables?

    Binary variables have two states, 0 and 1: when the state is 0 the variable is absent, and when the state is 1 it is present. There are two types of binary variables: symmetric and asymmetric. Symmetric binary variables have the same state values and weights; asymmetric binary variables do not have the same state values and weights.
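
    For illustration, the two similarity measures that follow from this distinction, sketched in Python: simple matching for symmetric variables (0/0 and 1/1 agreements both count) and the Jaccard coefficient for asymmetric ones (joint absences are ignored):

```python
def simple_matching(a, b):
    """Similarity for symmetric binary variables: fraction of agreeing positions."""
    agree = sum(x == y for x, y in zip(a, b))
    return agree / len(a)

def jaccard(a, b):
    """Similarity for asymmetric binary variables: 0/0 matches are ignored."""
    both = sum(x == 1 and y == 1 for x, y in zip(a, b))
    either = sum(x == 1 or y == 1 for x, y in zip(a, b))
    return both / either if either else 1.0
```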

  • 39. Explain The Issues Regarding Classification And Prediction?

    Preparing the data for classification and prediction:
    • Data cleaning
    • Relevance analysis
    • Data transformation

    Comparing classification methods:
    • Predictive accuracy
    • Speed
    • Robustness
    • Scalability
    • Interpretability
  • 40. What Are Non-additive Facts?

    Non-Additive: Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.

  • 41. What Is Meteorological Data?

    Meteorology is the interdisciplinary scientific study of the atmosphere. It observes the changes in temperature, air pressure, moisture and wind direction. Usually, temperature, pressure, wind measurements and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. There are many methods of collecting data and Radar, Lidar, satellites are some of them.

    Weather forecasts are made by collecting quantitative data about the current state of the atmosphere. The main issue in this prediction is that it involves high-dimensional data. To overcome this issue, it is necessary to first analyze and simplify the data before proceeding with other analysis. Some data mining techniques are appropriate in this context.

  • 42. Define Descriptive Model?

    It is used to determine the patterns and relationships in a sample data.

    Data mining tasks that belong to the descriptive model:

    1. Clustering
    2. Summarization
    3. Association rules
    4. Sequence discovery
  • 43. What Is A Star Schema?

    A star schema is a way of organizing tables such that results can be retrieved from the database easily and quickly in a warehouse environment. Usually a star schema consists of one or more dimension tables around a fact table, which makes the layout look like a star, hence the name.

  • 44. What Are The Steps Involved In Kdd Process?

    1. Data cleaning
    2. Data integration
    3. Data selection
    4. Data transformation
    5. Data mining
    6. Pattern evaluation
    7. Knowledge presentation
  • 45. What Is A Lookup Table?

    A lookup table is used when updating a warehouse. When a lookup is placed on the target table (fact table / warehouse) based on the primary key of the target, it updates the table by allowing through only new records or updated records, based on the lookup condition.

  • 46. What Is Attribute Selection Measure?

    The information gain measure is used to select the test attribute at each node in the decision tree. Such a measure is referred to as an attribute selection measure, or a measure of the goodness of a split.
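
    A minimal sketch of the computation in Python: the gain of a split is the entropy of the labels before the split minus the weighted entropy of the subsets it produces:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(labels, groups):
    """Entropy before the split minus the weighted entropy of the subsets."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder
```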

  • 47. Explain Statistical Perspective In Data Mining?

    • Point estimation
    • Data summarization
    • Bayesian techniques
    • Hypothesis testing
    • Regression
    • Correlation
  • 48. Define Wave Cluster?

    Wave cluster is a grid-based multi-resolution clustering method. In this method all the objects are represented by a multidimensional grid structure, and a wavelet transformation is applied to find the dense regions. Each grid cell contains the information of the group of objects that map into that cell. A wavelet transformation is a signal-processing technique that decomposes a signal into various frequency sub-bands.

  • 49. What Is Time Series Analysis?

    A time series is a set of attribute values over a period of time. Time Series Analysis may be viewed as finding patterns in the data and predicting future values.

  • 50. Explain Mining Single-dimensional Boolean Association Rules From Transactional Databases?

    The Apriori algorithm: finding frequent item sets using candidate generation, and mining frequent item sets without candidate generation.

  • 51. What Is Meta Learning?

    Meta learning is the concept of combining the predictions made by multiple data mining models and analyzing those predictions to formulate a new and previously unknown prediction.

  • 52. Describe Important Index Characteristics?

    The characteristics of indexes are:
    * They speed up the searching of a row.
    * They are sorted by the key values.
    * They are small and contain only a small number of the table's columns.
    * They refer to the appropriate block of the table with a key value.

  • 53. What Is The Use Of Regression?

    Regression can be used to solve classification problems, but it can also be used for applications such as forecasting. Regression can be performed using many different types of techniques; in essence, regression takes a set of data and fits the data to a formula.
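
    A minimal least-squares sketch in Python, fitting the data to the formula y = a*x + b:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx
```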

  • 54. What Is Dimensional Modelling? Why Is It Important ?

    Dimensional modelling is a design concept used by many data warehouse designers to build their data warehouse. In this design model all the data is stored in two types of tables – fact tables and dimension tables. The fact table contains the facts/measurements of the business, and the dimension table contains the context of the measurements, i.e. the dimensions on which the facts are calculated.

  • 55. What Is Unique Index?

    A unique index is an index applied to a column of unique values.
    A unique index can also be applied to a group of columns.

  • 56. What Are The Foundations Of Data Mining?

    Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery. Data mining is ready for application in the business community because it is supported by three technologies that are now sufficiently mature:
    * Massive data collection
    * Powerful multiprocessor computers
    * Data mining algorithms

    Commercial databases are growing at unprecedented rates. A recent META Group survey of data warehouse projects found that 19% of respondents are beyond the 50-gigabyte level, while 59% expected to be there by the second quarter of 1996. In some industries, such as retail, these numbers can be much larger. The accompanying need for improved computational engines can now be met in a cost-effective manner with parallel multiprocessor computer technology. Data mining algorithms embody techniques that have existed for at least 10 years, but have only recently been implemented as mature, reliable, understandable tools that consistently outperform older statistical methods.

  • 57. What Is A Snowflake Schema?

    In a snowflake schema, each dimension has a primary dimension table, to which one or more additional dimension tables can join. The primary dimension table is the only table that can join to the fact table.

  • 58. Differences Between Star And Snowflake Schemas?

    Star schema – all dimensions are linked directly to the fact table.
    Snowflake schema – dimensions may be interlinked or may have one-to-many relationships with other tables.

  • 59. What Is Hierarchical Method?

    Hierarchical method groups all the objects into a tree of clusters that are arranged in a hierarchical order. This method works on bottom-up or top-down approaches.

  • 60. What Is Cure?

    Clustering Using Representatives is called CURE. Clustering algorithms generally work on spherical and similarly sized clusters. CURE overcomes this limitation and is more robust with respect to outliers.

  • 61. What Is Etl?

    ETL stands for extraction, transformation and loading.

    ETL provides developers with an interface for designing source-to-target mappings, transformations and job control parameters.
    *Extraction
    Takes data from an external source and moves it to the warehouse pre-processor database.
    *Transformation
    The transform data task allows point-to-point generating, modifying and transforming of data.
    *Loading
    The load data task adds records to a database table in the warehouse.

  • 62. Define Rollup And Cube?

    Custom rollup operators provide a simple way of controlling the process of rolling up a member to its parent's value. The rollup uses the contents of the column as the custom rollup operator for each member, which is used to evaluate the value of the member's parent.

    If a cube has multiple custom rollup formulas and custom rollup members, the formulas are resolved in the order in which the dimensions were added to the cube.
