{"id":198,"date":"2018-01-08T14:10:00","date_gmt":"2018-01-08T14:10:00","guid":{"rendered":"https:\/\/kindsonthegenius.com\/blog\/2018\/01\/08\/intelligent-data-analysis-quetions-and-answers-summary2017\/"},"modified":"2018-01-08T14:10:00","modified_gmt":"2018-01-08T14:10:00","slug":"intelligent-data-analysis-quetions-and-answers-summary2017","status":"publish","type":"post","link":"https:\/\/kindsonthegenius.com\/blog\/intelligent-data-analysis-quetions-and-answers-summary2017\/","title":{"rendered":"Intelligent Data Analysis Quetions and Answers Summary(2017)"},"content":{"rendered":"<div style=\"color: #555555; font-size: 18px; line-height: 30px; text-align: justify;\">\n<div style=\"font-family: 'segoe ui';\"><span style=\"color: #cc0000;\"><b>1. PARADIGMS OF LEARNING<\/b><\/span><\/p>\n<hr \/>\n<p><b>Interpretation of Probability<\/b><br \/>Probability expresses uncertainty that an even may or may not occur and is a key concept in pattern recognition.<br \/>Lets assume two random variables <b>X<\/b> and <b>Y<\/b> such that <b>X<\/b> can take values x<sub>i<\/sub> where <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>i = 1,&#8230;, M<\/i><\/span> and <b>Y<\/b> can take values <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">y<sub>i<\/sub><\/span><\/i> where <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">i = 1,&#8230;,N.<\/span><\/i><br \/>Also let the total number of times X takes the value x<sub>i<\/sub> by c<sub>i<\/sub> and the total number of times Y takes the value <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">y<sub>i<\/sub><\/span><\/i> be <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>c<sub>j<\/sub>.<\/i><\/span><\/p>\n<p><span style=\"color: #cc0000;\"><i>Note the Following:<\/i><\/span><br \/><i>1. 
<\/i>The probability that X takes the value x<sub>i<\/sub> and Y takes the value y<sub>j<\/sub> is written as: <br \/><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>p(X = x<sub>i<\/sub>, Y = y<sub>j<\/sub>) = n<sub>ij<\/sub>\/N<\/i><\/span><br \/>where n<sub>ij<\/sub> is the number of trials in which X = x<sub>i<\/sub> and Y = y<sub>j<\/sub> simultaneously. This is called the joint probability of <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>X = x<sub>i<\/sub><\/i><\/span> and <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">Y = y<sub>j<\/sub><\/span><\/i><\/p>\n<p>2. The probability that X takes the value x<sub>i<\/sub>, called the marginal probability, is given as<br \/><i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">P(X = x<sub>i<\/sub>) = c<sub>i<\/sub>\/N<\/span><\/i><\/p>\n<p><span style=\"color: #cc0000;\">Rules of Probability <\/span><br \/>The two rules of probability are the sum rule and the product rule, given below:<\/div>\n<div style=\"clear: both; text-align: center;\"><a href=\"https:\/\/4.bp.blogspot.com\/-5HJCsp9Co1o\/WlP7aZwx3_I\/AAAAAAAAAqA\/TkwS1jw1yj8vwtwFe3OzZ7IsV-rgk01SQCLcBGAs\/s1600\/Probability%2BRules.jpg\" style=\"margin-left: 1em; margin-right: 1em;\"><img decoding=\"async\" loading=\"lazy\" border=\"0\" 
data-original-height=\"330\" data-original-width=\"873\" height=\"150\" src=\"https:\/\/4.bp.blogspot.com\/-5HJCsp9Co1o\/WlP7aZwx3_I\/AAAAAAAAAqA\/TkwS1jw1yj8vwtwFe3OzZ7IsV-rgk01SQCLcBGAs\/s400\/Probability%2BRules.jpg\" width=\"400\" \/><\/a><\/div>\n<div style=\"font-family: 'segoe ui';\"><b>Bayesian Model<\/b><br \/>Bayesian model comparison uses probabilities to represent uncertainty in the choice of model, together with the consistent application of the sum and product rules of probability.<\/p>\n<p><span style=\"color: #cc0000;\"><b>2. STATISTICAL DECISION THEORY<\/b><\/span><\/p>\n<hr \/>\n<p><b>Optimal Decision<\/b><br \/>In decision theory, the optimal decision is the choice among the possible alternatives that minimizes the expected loss (equivalently, maximizes the expected utility).<\/p>\n<p><b>Receiver Operating Characteristic (ROC) Curve<\/b><br \/>This is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.<\/p>\n<p><b>Area Under the ROC Curve<\/b><br \/>The AUC is equal to the probability that a classifier would rank a randomly chosen positive instance higher than a randomly chosen negative one.<\/p>\n<p><span style=\"color: #cc0000;\"><b>3. PHASES OF DATA ANALYSIS USING MACHINE LEARNING, VEDA<\/b><\/span><\/p>\n<hr \/>\n<p>VEDA &#8211; Visual Exploratory Data Analysis<br \/>Exploratory data analysis in statistics and machine learning is an approach to analyzing data sets to summarize their key features, often using visual methods.<\/p>\n<p><b>Study Design<\/b><br \/>Study design is an aspect of exploratory data analysis. 
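The AUC interpretation given above (the probability that a randomly chosen positive instance is ranked above a randomly chosen negative one) can be computed directly by comparing all positive/negative score pairs. The scores and labels below are illustrative assumptions, not data from this article:

```python
def auc(scores, labels):
    """Pairwise estimate of the area under the ROC curve.

    Counts the fraction of (positive, negative) pairs in which the
    positive instance receives the higher score; ties count as 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that ranks every positive above every negative has AUC 1.0.
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # -> 1.0
```

This pairwise form is equivalent to integrating the ROC curve, which is why AUC is threshold-independent.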
It is the set of methods and procedures used in collecting and analyzing measures of the variables specified in the research problem. The design of a study specifies the study type and sub-type.<\/p>\n<p><b>Exploratory vs Confirmatory Data Analysis<\/b><br \/>While confirmatory data analysis tests an a priori hypothesis, that is, an outcome prediction made before the measurement phase begins,<br \/>exploratory data analysis seeks to discover a posteriori patterns in the dataset and looks for potential relations between the variables.<\/p>\n<p><b>What is Anomaly Detection?<\/b><br \/>Anomaly detection is the identification of items, events or observations that do not conform to an expected pattern or to other items in the dataset. Anomalies are also referred to as outliers, novelties, noise, deviations or exceptions. Anomaly detection is carried out using any of the following methods:<\/p>\n<ul>\n<li>Density-based techniques such as k-nearest neighbor and local outlier factor<\/li>\n<li>Subspace- and correlation-based techniques<\/li>\n<li>One-class SVM<\/li>\n<li>Replicator neural networks<\/li>\n<li>Cluster-analysis-based techniques<\/li>\n<li>Fuzzy-logic-based techniques<\/li>\n<\/ul>\n<p><span style=\"color: #cc0000;\"><b>4. LINEAR MODELS FOR REGRESSION<\/b><\/span><\/p>\n<hr \/>\n<p>Linear Regression. Linear-in-Parameter models.<br \/>The goal of regression is to predict the value of one or more continuous target variables <span style=\"color: black;\"><i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">t<\/span><\/i><\/span>, given the value of a D-dimensional vector <span style=\"color: black;\"><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><b>x<\/b><\/span><\/span> of the input variables.<br \/>The simplest form of linear regression models are linear functions of the input variables. 
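For a single input variable, such a linear function can be fitted by least squares in closed form. A minimal sketch follows; the data points are illustrative assumptions:

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = w0 + w1*x for 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    # Intercept: the fitted line passes through the mean point (mx, my).
    w0 = my - w1 * mx
    return w0, w1

# Data generated exactly by y = 1 + 2x, so the fit recovers w0=1, w1=2.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))
```

The same normal-equation idea generalizes to D inputs and to basis-function expansions, as discussed in the next paragraphs.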
A more useful class of functions can be obtained by taking linear combinations of a fixed set of non-linear functions of the input variables, known as basis functions. <br \/>The simplest linear model for regression is one that involves a linear combination of the input variables, as shown:<\/p>\n<p><i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">y(x, w) = w<sub>0<\/sub>&nbsp;+ w<sub>1<\/sub>x<sub>1<\/sub> +&#8230;+ w<sub>D<\/sub>x<sub>D<\/sub><\/span><\/i><br \/>where&nbsp; <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>x = (x<sub>1<\/sub>,&#8230;,x<sub>D<\/sub>)<sup>T<\/sup><\/i><\/span><\/p>\n<p><b>Principle of Least Squares (LS)<\/b><br \/>The principle of Least Squares (LS) is a technique for minimizing the error between the prediction for each data point and the corresponding target value.<br \/>The error function is given as the sum of the squares of the errors, as shown below:<\/p>\n<div style=\"clear: both; text-align: center;\"><a href=\"https:\/\/1.bp.blogspot.com\/-kGsWH2ALDJE\/WlNbK1nQ-FI\/AAAAAAAAApw\/CGSlr8McGJcj4XXwJf07OH-z-4qyugr7gCLcBGAs\/s1600\/Sum-of-Squares-Error-Function-for-Classification.jpg\" style=\"margin-left: 1em; margin-right: 1em;\"><img decoding=\"async\" loading=\"lazy\" border=\"0\" data-original-height=\"128\" data-original-width=\"501\" height=\"81\" src=\"https:\/\/1.bp.blogspot.com\/-kGsWH2ALDJE\/WlNbK1nQ-FI\/AAAAAAAAApw\/CGSlr8McGJcj4XXwJf07OH-z-4qyugr7gCLcBGAs\/s320\/Sum-of-Squares-Error-Function-for-Classification.jpg\" width=\"320\" \/><\/a><\/div>\n<p><b>Principle of Maximum Likelihood (ML)<\/b><br \/>Maximum likelihood is a procedure for estimating the values of one or more parameters by choosing the values that maximize the likelihood of the observed data.<\/p>\n<p><b>Principle of Maximum A Posteriori (MAP)<\/b><br \/>Maximum a posteriori estimation gives an estimate of an unknown quantity that is equal to the mode of the posterior distribution.<\/p>\n<p><b>The 
Least-Squares (LS) Solution<\/b><br \/>The least-squares method in regression analysis approximates the solution to a set of equations by minimizing the sum of the squares of the residuals of each equation.<br \/>A residual is the difference between an observed value and the fitted value provided by the model.<\/p>\n<p><b>Problem of Overfitting<\/b><br \/>Overfitting is a condition in regression where a statistical model begins to describe the random error in the data rather than the relationship between the variables.<br \/>It is the production of an analysis that corresponds too closely to a particular set of data, to such an extent that it may fail to fit additional data or observations reliably.<\/p>\n<p><span style=\"color: #cc0000;\"><b>5. CLASSIFICATION<\/b><\/span><\/p>\n<hr \/>\n<p>Classification in machine learning has to do with identifying the class or category to which a new observation should be assigned. This is done on the basis of a training data set containing observations whose classes are known.<br \/>An example is to categorize emails into two classes: spam and non-spam. In this case, an incoming email is the observation, while the classes are &#8216;spam&#8217; and &#8216;non-spam&#8217;.<br \/>The goal of classification is to take an input vector x and assign it to one of K discrete classes <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">C<sub>k<\/sub><\/span><\/i> where <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">k = 1,&#8230;,K. <\/span><\/i><\/p>\n<p><span style=\"color: #cc0000;\"><b>6. NEURAL NETWORKS<\/b><\/span><\/p>\n<hr \/>\n<p>A neural network is an interconnected network of nodes, called neurons, connected by edges that have assigned weights. 
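A single neuron of such a network computes a weighted sum of its inputs plus a bias, then passes the result through an activation function. A minimal sketch; the weights, bias and inputs below are illustrative assumptions:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashes to (0, 1)

# Two inputs, two edge weights, one bias term.
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Stacking layers of such neurons, and learning the edge weights from data (e.g. by backpropagation), gives the networks described in this section.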
Neural networks are designed to mimic the behaviour of the biological neural networks of the human brain.<br \/>Each connection in a neural network transmits a signal from one neuron to another.<\/p>\n<\/div>\n<div style=\"font-family: 'segoe ui';\"><span style=\"color: #cc0000;\"><b>7. DIMENSIONALITY REDUCTION<\/b><\/span><\/p>\n<hr \/>\n<p>Dimensionality reduction in machine learning is the process of reducing the number of random variables under analysis by obtaining a smaller set of principal variables.<br \/>Dimensionality reduction can be divided into two categories: feature selection, which tries to find a subset (features) of the original variables, and feature extraction, which transforms the data from a higher-dimensional space into fewer dimensions.<\/p>\n<p><b>Principal Component Analysis<\/b><br \/>Principal Component Analysis (PCA) is an example of feature extraction. PCA performs an eigendecomposition of the covariance matrix of the original high-dimensional data. The result of this decomposition is a set of eigenvectors and a set of eigenvalues.<br \/>The eigenvectors that correspond to the largest eigenvalues are then taken as the principal components (PCs).<\/div>\n<div style=\"font-family: 'segoe ui';\"><span style=\"color: #cc0000;\"><b>8. CLUSTERING<\/b><\/span><\/p>\n<hr \/>\n<p>Clustering is an unsupervised learning method that aims at finding sub-groups within the data that have similar characteristics.<\/p>\n<p><b>K-Means Clustering<\/b><br \/>K-means clustering is a clustering method that aims to partition the data into K clusters in which each observation belongs to the cluster with the nearest mean.<\/p>\n<p><b>How it Works<\/b><br \/>Suppose we have a data set {x<sub>1<\/sub>, &#8230; , x<sub>N<\/sub>}, a set of N observations of the variable x. 
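The k-means procedure just named, and formalized in the next paragraph, alternates between assigning points to their nearest centroid and moving each centroid to the mean of its assigned points. A toy one-dimensional sketch; the data and initial centroids below are illustrative assumptions:

```python
def kmeans(xs, centroids, iters=10):
    """Toy k-means on 1-D data: alternate assignment and update steps."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in centroids]
        for x in xs:
            k = min(range(len(centroids)),
                    key=lambda j: (x - centroids[j]) ** 2)
            clusters[k].append(x)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        centroids = [sum(c) / len(c) if c else m
                     for c, m in zip(clusters, centroids)]
    return centroids

# Two well-separated groups around 1 and 9.5; K = 2.
print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.0], [0.0, 5.0]))
```

Each iteration can only decrease the sum-of-squared-distances objective, so the procedure converges, though possibly to a local minimum that depends on the initial centroids.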
We want to partition the data set into some number K of clusters.<br \/>Let's assume a set of D-dimensional vectors <b style=\"mso-bidi-font-weight: normal;\"><span style=\"font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 14.0pt; line-height: 115%;\">\u00b5<\/span><\/b><sub>k<\/sub>, where k = 1, &#8230; , K, and <b style=\"mso-bidi-font-weight: normal;\"><span style=\"font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 14.0pt; line-height: 115%;\">\u00b5<\/span><\/b><sub>k<\/sub> is the centroid (or mean) associated with the kth cluster.<br \/>The goal is to assign the data points to clusters, that is, to a set of vectors {<b style=\"mso-bidi-font-weight: normal;\"><span style=\"font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 14.0pt; line-height: 115%;\">\u00b5<\/span><\/b><sub>k<\/sub>}, in such a way that the sum of the squares of the distances of each data point to its closest <b style=\"mso-bidi-font-weight: normal;\"><span style=\"font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 14.0pt; line-height: 115%;\">\u00b5<\/span><\/b><sub>k<\/sub> is minimized.<\/div>\n<div style=\"font-family: 'segoe ui';\"><span style=\"color: #cc0000;\"><b>9. INDEPENDENT MODELS OF PROBABILITY DISTRIBUTION<\/b><\/span><\/p>\n<hr \/>\n<p>Conditional independence is a concept in probability theory that relates two events. 
Two events a and b are conditionally independent given a third event c if the occurrence of a and the occurrence of b are independent events in their conditional probability distribution given c.<br \/>In other words, a and b are conditionally independent given c if and only if, given the knowledge that <i>c<\/i> occurs, knowledge of whether <i><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">a<\/span><\/i> occurs provides no information on the likelihood of <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\">b<\/span> occurring, and knowledge of whether <span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>b<\/i><\/span> occurs provides no information on the likelihood of a occurring.<br \/>This can be represented as follows:<\/p>\n<div style=\"text-align: center;\"><span style=\"font-family: &quot;georgia&quot; , &quot;times new roman&quot; , serif;\"><i>p(a|b,c) = p(a|c)<\/i><\/span><\/div>\n<\/div>\n<div style=\"font-family: 'segoe ui';\">This means that a is conditionally independent of b given c.<\/p>\n<p><span style=\"color: #cc0000;\"><b>10. FULL BAYESIAN LEARNING: MARKOV CHAIN MONTE CARLO METHODS<\/b><\/span><\/p>\n<hr \/>\n<p>A Markov chain is a model that describes a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.<\/p><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>1. 
PARADIGMS OF LEARNING Interpretation of Probability Probability expresses uncertainty that an event may or may not occur and is a key concept in pattern recognition. Let's &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0},"categories":[14],"tags":[],"_links":{"self":[{"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/posts\/198"}],"collection":[{"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/comments?post=198"}],"version-history":[{"count":0,"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/posts\/198\/revisions"}],"wp:attachment":[{"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/media?parent=198"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/categories?post=198"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kindsonthegenius.com\/blog\/wp-json\/wp\/v2\/tags?post=198"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}