@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix yago: <http://dbpedia.org/class/yago/> .
dbr:Principal_component_analysis rdf:type yago:Decomposition106013471 ,
yago:Algebra106012726 ,
yago:PureMathematics106003682 ,
yago:Mathematics106000644 ,
yago:Cognition100023271 ,
yago:Discipline105996646 ,
yago:VectorAlgebra106013298 ,
yago:MatrixDecompositions ,
yago:PsychologicalFeature100023100 .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
dbr:Principal_component_analysis rdf:type owl:Thing ,
yago:KnowledgeDomain105999266 ,
yago:Science105999797 ,
yago:Content105809192 ,
yago:Abstraction100002137 .
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix yago-res: <http://yago-knowledge.org/resource/> .
dbr:Principal_component_analysis owl:sameAs yago-res:Principal_component_analysis .
@prefix dbpedia-fr: <http://fr.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-fr:Analyse_en_composantes_principales .
@prefix ns7: <http://www.wikidata.org/entity/> .
dbr:Principal_component_analysis owl:sameAs ns7:Q2873 .
@prefix dbpedia-id: <http://id.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-id:Analisis_komponen_utama .
@prefix dbpedia-nl: <http://nl.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-nl:Hoofdcomponentenanalyse .
@prefix dbpedia-eu: <http://eu.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-eu:Osagai_nagusien_analisi .
@prefix dbpedia-wikidata: <http://wikidata.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-wikidata:Q2873 .
@prefix dbpedia-it: <http://it.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-it:Analisi_delle_componenti_principali .
@prefix dbpedia-de: <http://de.dbpedia.org/resource/> .
dbr:Principal_component_analysis owl:sameAs dbpedia-de:Hauptkomponentenanalyse .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix wikipedia-en: <http://en.wikipedia.org/wiki/> .
dbr:Principal_component_analysis foaf:isPrimaryTopicOf wikipedia-en:Principal_component_analysis .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
dbr:Principal_component_analysis rdfs:comment "The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared residuals orthogonal to the direction; each subsequent principal component is a direction orthogonal to the first that best fits the residual. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible."@en ;
rdfs:label "Principal component analysis"@en ;
rdfs:seeAlso dbr:Portfolio_optimization ;
dbo:abstract "The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared residuals orthogonal to the direction (that is, minimizes the sum or average of squared distances from the line through the origin in the direction of the direction vector). Each subsequent principal component is a direction orthogonal to the first that best fits the residual. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. By construction, the principle components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"In machine learning, principal component analysis (PCA) is a method for projecting data in a higher-dimensional space into a lower-dimensional space while preserving as much of the data's variance as possible. Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is performed either by singular value decomposition of a design matrix or by the following two steps: (1) calculating the data covariance (or correlation) matrix of the original data, and (2) performing eigenvalue decomposition on the covariance matrix. Usually the original data are normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting the variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, also normalize each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If the component scores are standardized to unit variance, the loadings must contain the data variance (whose magnitude is given by the eigenvalues). 
If the component scores are not standardized (and therefore contain the data variance), the loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of the orthogonal rotation of variables into principal components and back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of the dataset as viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves for the eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets, while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared residuals orthogonal to the direction; each subsequent principal component is a direction orthogonal to the first that best fits the residual. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. For a collection of points in and , a direction for the best-fitting line can be chosen from directions perpendicular to the first best-fitting lines. These directions form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. For a collection of points in and , a direction for the best-fitting line can be chosen from directions perpendicular to the first best-fitting lines. These directions comprise an orthonormal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components and several related procedures principal component analysis (PCA). Usually, PCA refers to the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The principal component can be taken as a direction orthogonal to the first principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors where the vector is the direction of a line that best fits the data while being orthogonal to the first vectors. Here, a best-fitting line is defined as a line that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The principal component can be taken as a direction orthogonal to the first principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the distance to the vector for all data points; the subsequent principal components are calculated similarly and are orthogonal to the previous principal components. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The principal components form the orthonormal basis vectors that define the new space, in which the dimensions are uncorrelated. From either the maximum-variance or minimum-square-distance objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors where the vector is the direction of a line that best fits the data while being orthogonal to the first vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The principal component can be taken as a direction orthogonal to the first principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared errors to the vector; the subsequent principal components are calculated similarly and are orthogonal to the previous principal components. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The principal components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as a line that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components, and several related procedures Principal Component Analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction, i.e., by projecting each data point onto only the first few principal components. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The ith principal component is the direction that maximizes the variance of the projected data and is orthogonal to the first i-1 principal components. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using an eigendecomposition of the data covariance matrix or SVD of the data matrix. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction of a line that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared distances from points to the line. Each subsequent principal component is a direction of a line that minimizes the sum of squared distances and is orthogonal to the preceding principal components. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. By construction, the principal components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. Similarly, a direction for the i-th best-fitting line can be chosen from directions perpendicular to the first i-1 best-fitting lines. These directions comprise an orthonormal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components and several related procedures Principal Component Analysis (PCA). Usually, PCA refers to the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. Similarly, a direction for the i-th best-fitting line can be chosen from directions perpendicular to the first i-1 best-fitting lines. This process defines an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components and several related procedures Principal Component Analysis (PCA). Often, PCA refers to the process of computing the principal components and using some or all of them to perform a change of basis on the data. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components, and several related procedures Principal Component Analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is done either by singular value decomposition of a design matrix or by the following two steps: (1) calculating the data covariance (or correlation) matrix of the original data and (2) performing eigenvalue decomposition on the covariance matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction of a line that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared distances from points to the line. Each subsequent principal component is a direction of a line that minimizes the sum of squared distances and is orthogonal to the preceding principal components. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. By construction, the principal components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th element is the direction of a line that best fits the data while being orthogonal to the first i-1 elements. Here, a best-fitting line is defined as a line that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Abdi, H., & Williams, L.J. (2010). \"Principal component analysis\". Wiley Interdisciplinary Reviews: Computational Statistics. 2 (4): 433\u2013459. arXiv:1108.4372. doi:10.1002/wics.101. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. 
Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared errors to the vector; the subsequent principal components are calculated similarly and are orthogonal to the previous principal components. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The principal components form the orthonormal basis vectors that define the new space, in which the dimensions are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components, and several related procedures principal component analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is done either by the following two steps: (1) calculating the data covariance (or correlation) matrix of the original data and (2) performing eigenvalue decomposition on the covariance matrix, or by singular value decomposition of a design matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components and several related procedures Principal Component Analysis (PCA). Often, PCA refers to the process of computing the principal components and using some or all of them to perform a change of basis on the data. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction, i.e. by projecting each data point onto only the first few principal components. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is either done by singular value decomposition of a design matrix or by doing the following 2 steps: 1. \n* calculating the data covariance (or correlation) matrix of the original data 2. \n* performing eigenvalue decomposition on the covariance matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. 
PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components, and several related procedures principal component analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is either done by singular value decomposition of a design matrix or by doing the following 2 steps: 1. \n* calculating the data covariance (or correlation) matrix of the original data 2. \n* performing eigenvalue decomposition on the covariance matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data. In some cases, retaining the first few principal components leads to noise reduction, discovering systematic variation, or approximating latent variable models. As one of the most popular multivariate methods, its applications span from signal processing and finance to neuroscience and genomics. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. 
Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. For a collection of points in n-dimensional real space, a direction for the i-th best-fitting line can be chosen from directions perpendicular to the first i-1 best-fitting lines. These directions comprise an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components and several related procedures Principal Component Analysis (PCA). Usually, PCA refers to the process of computing the principal components and using them to perform a change of basis on the data, sometimes only using the first few principal components and ignoring the rest. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction of a line that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared residuals orthogonal to the direction (that is, minimizes the sum or average of squared distances from the data points to the line through the origin in the direction of the direction vector). Each subsequent principal component is a direction orthogonal to the first that best fits the residual. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. By construction, the principal components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The principal components of a collection of points in a real p-space are a sequence of direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Principal component analysis (PCA) is the process of computing the directions (principal components) that align with most of the variation in a set of data points in a multidimensional space, and using some or all of these components to perform a change of basis on the data. Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line, or equivalently that aligns with the largest possible amount of the data variance. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These ordered basis vectors are called principal components. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction, i.e. by projecting each data point onto only the first few principal components. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"In statistics, principal component analysis (PCA) is a method for projecting data from a higher dimensional space into a lower dimensional space by choosing projection directions that successively maximize the variance of the projected data. Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is either done by singular value decomposition of a design matrix or by doing the following 2 steps: 1. \n* calculating the data covariance (or correlation) matrix of the original data 2. \n* performing eigenvalue decomposition on the covariance matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components, and several related procedures Principal Component Analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is either done by singular value decomposition of a design matrix or by doing the following 2 steps: 1. \n* calculating the data covariance (or correlation) matrix of the original data 2. \n* performing eigenvalue decomposition on the covariance matrix. Usually the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"The first principal component of a set of data points in a multidimensional space is the direction vector that best fits the data, in that it maximizes the variance of the projected data or minimizes the sum of squared residuals orthogonal to the direction (that is, minimizes the sum or average of squared distances from the data points to the line through the origin in the direction of the direction vector). Each subsequent principal component is a direction orthogonal to the first that best fits the residual. Principal component analysis or PCA is the process of finding or using such components. PCA is a statistical tool used in exploratory data analysis and in predictive modeling. It is commonly used for dimensionality reduction by projecting each data point onto only the first several principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. By construction, the principal components form an orthonormal basis in which different individual dimensions of the data are uncorrelated. From either the maximum-variance or minimum-square-residual objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components and several related procedures Principal Component Analysis (PCA). Often, PCA refers to the process of computing the principal components and using some or all of them to perform a change of basis on the data. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i-1 principal components that maximizes the variance of the projected data. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components, and several related procedures principal component analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA is either done by singular value decomposition of a design matrix or by doing the following two steps: \n* calculating the data covariance (or correlation) matrix of the original data \n* performing eigenvalue decomposition on the covariance matrix \nUsually, the original data is normalized before performing the PCA. The normalization of each attribute consists of mean centering \u2013 subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero. Some fields, in addition to normalizing the mean, do so for each variable's variance (to make it equal to 1); see z-scores. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of the eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of the orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets, while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ,
"Given a collection of points in two, three, or higher dimensional space, a \"best fitting\" line can be defined as one that minimizes the average squared perpendicular distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called Principal Components, and several related procedures Principal Component Analysis (PCA). PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction, i.e. projecting each data point onto only the first few principal components. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction that maximizes the variance of the projected data and is orthogonal to the first i-1 principal components. From either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, principal components are often computed using an eigendecomposition of the data covariance matrix or a singular value decomposition (SVD) of the data matrix. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of the eigenvalues). 
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled (\"normalized\"), and these weights are called eigenvectors; they are the cosines of the orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets, while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed."@en ;
dbo:wikiPageEditLink ;
dbo:wikiPageExternalLink ,
,
.
@prefix ns17: .
dbr:Principal_component_analysis dbo:wikiPageExternalLink ns17:math-php ,
,
,
.
@prefix ns18: .
dbr:Principal_component_analysis dbo:wikiPageExternalLink ns18:b98835 ,
,
.
@prefix xsd: .
dbr:Principal_component_analysis dbo:wikiPageExtracted "2020-12-10T15:55:41Z"^^xsd:dateTime ,
"2020-11-17T19:55:54Z"^^xsd:dateTime ,
"2020-04-26T07:37:54Z"^^xsd:dateTime ,
"2020-08-07T20:13:17Z"^^xsd:dateTime ,
"2020-08-26T01:11:20Z"^^xsd:dateTime ,
"2020-07-24T08:29:32Z"^^xsd:dateTime ,
"2020-08-15T23:06:07Z"^^xsd:dateTime ,
"2020-09-02T08:19:47Z"^^xsd:dateTime ,
"2020-07-04T17:53:18Z"^^xsd:dateTime ,
"2020-08-22T23:01:23Z"^^xsd:dateTime ,
"2020-08-26T01:29:04Z"^^xsd:dateTime ,
"2020-12-16T17:20:09Z"^^xsd:dateTime ,
"2020-08-26T00:21:26Z"^^xsd:dateTime ,
"2020-11-20T16:46:34Z"^^xsd:dateTime ,
"2020-07-19T21:00:07Z"^^xsd:dateTime ,
"2021-04-15T17:39:47Z"^^xsd:dateTime ,
"2020-08-26T01:31:52Z"^^xsd:dateTime ,
"2020-11-20T17:49:34Z"^^xsd:dateTime ,
"2020-05-06T21:36:19Z"^^xsd:dateTime ,
"2020-08-25T22:16:22Z"^^xsd:dateTime ,
"2020-09-19T14:39:32Z"^^xsd:dateTime ,
"2020-08-22T17:43:39Z"^^xsd:dateTime ,
"2020-11-29T03:51:32Z"^^xsd:dateTime ,
"2020-08-21T15:55:54Z"^^xsd:dateTime ,
"2020-12-10T16:10:31Z"^^xsd:dateTime ,
"2020-12-29T01:00:43Z"^^xsd:dateTime ,
"2020-08-26T00:22:26Z"^^xsd:dateTime ,
"2020-06-09T19:48:44Z"^^xsd:dateTime ,
"2020-06-09T03:54:22Z"^^xsd:dateTime ,
"2020-05-04T13:43:04Z"^^xsd:dateTime ,
"2020-12-30T23:43:09Z"^^xsd:dateTime ,
"2021-03-08T16:34:31Z"^^xsd:dateTime ,
"2020-08-21T20:38:25Z"^^xsd:dateTime ,
"2020-09-03T16:30:23Z"^^xsd:dateTime ,
"2020-08-22T16:23:41Z"^^xsd:dateTime ,
"2020-08-19T22:24:46Z"^^xsd:dateTime ,
"2021-02-21T15:06:50Z"^^xsd:dateTime ,
"2020-08-07T21:44:13Z"^^xsd:dateTime ,
"2020-12-28T12:59:12Z"^^xsd:dateTime ,
"2020-06-27T23:24:37Z"^^xsd:dateTime ,
"2020-08-19T23:14:34Z"^^xsd:dateTime ,
"2020-05-14T12:52:18Z"^^xsd:dateTime ,
"2020-08-22T23:02:23Z"^^xsd:dateTime ,
"2020-05-22T11:51:54Z"^^xsd:dateTime ,
"2020-11-17T20:19:12Z"^^xsd:dateTime ,
"2020-08-24T07:59:29Z"^^xsd:dateTime ,
"2020-12-10T15:59:38Z"^^xsd:dateTime ,
"2020-12-01T16:30:29Z"^^xsd:dateTime ,
"2020-12-27T14:47:29Z"^^xsd:dateTime ,
"2020-08-26T00:35:07Z"^^xsd:dateTime ,
"2020-08-22T23:55:46Z"^^xsd:dateTime ,
"2020-08-18T16:57:32Z"^^xsd:dateTime ,
"2020-08-03T20:45:40Z"^^xsd:dateTime ,
"2021-01-12T14:59:18Z"^^xsd:dateTime ,
"2020-07-02T23:30:38Z"^^xsd:dateTime ,
"2020-08-20T16:09:35Z"^^xsd:dateTime ,
"2020-08-20T14:44:19Z"^^xsd:dateTime ,
"2020-09-18T05:31:13Z"^^xsd:dateTime ,
"2020-08-26T01:50:06Z"^^xsd:dateTime ,
"2020-08-21T22:46:56Z"^^xsd:dateTime ,
"2020-08-24T16:59:36Z"^^xsd:dateTime ,
"2020-07-23T21:07:22Z"^^xsd:dateTime ,
"2020-12-10T16:09:20Z"^^xsd:dateTime ,
"2020-05-15T22:30:56Z"^^xsd:dateTime ,
"2020-08-19T15:39:06Z"^^xsd:dateTime ,
"2020-08-19T22:49:29Z"^^xsd:dateTime ,
"2020-08-22T13:17:03Z"^^xsd:dateTime ,
"2020-12-27T20:56:53Z"^^xsd:dateTime ,
"2021-02-22T08:42:40Z"^^xsd:dateTime ,
"2020-09-19T14:43:59Z"^^xsd:dateTime ,
"2020-08-11T18:35:53Z"^^xsd:dateTime ,
"2020-08-26T00:31:12Z"^^xsd:dateTime ,
"2020-08-26T03:47:37Z"^^xsd:dateTime ,
"2021-02-22T08:43:15Z"^^xsd:dateTime ,
"2020-09-19T14:46:26Z"^^xsd:dateTime ,
"2020-08-24T15:35:41Z"^^xsd:dateTime ,
"2020-08-25T23:58:46Z"^^xsd:dateTime ,
"2020-08-22T16:23:54Z"^^xsd:dateTime ,
"2020-12-10T16:00:52Z"^^xsd:dateTime ,
"2020-10-19T23:27:29Z"^^xsd:dateTime ,
"2020-08-22T20:10:21Z"^^xsd:dateTime ,
"2020-08-20T02:35:57Z"^^xsd:dateTime ,
"2021-01-17T21:18:50Z"^^xsd:dateTime ,
"2020-11-05T00:35:45Z"^^xsd:dateTime ,
"2020-08-18T17:05:21Z"^^xsd:dateTime ,
"2021-02-22T08:43:10Z"^^xsd:dateTime ,
"2021-01-12T21:28:11Z"^^xsd:dateTime ,
"2020-08-20T14:40:00Z"^^xsd:dateTime ,
"2020-08-20T02:31:09Z"^^xsd:dateTime ,
"2020-12-01T16:31:19Z"^^xsd:dateTime ,
"2020-08-03T15:26:59Z"^^xsd:dateTime ,
"2020-08-23T17:36:38Z"^^xsd:dateTime ,
"2020-08-19T07:22:30Z"^^xsd:dateTime ,
"2020-12-11T02:24:13Z"^^xsd:dateTime ,
"2020-11-30T14:42:53Z"^^xsd:dateTime ,
"2020-09-19T14:37:47Z"^^xsd:dateTime ,
"2021-04-15T17:39:55Z"^^xsd:dateTime ,
"2020-11-20T17:48:57Z"^^xsd:dateTime ,
"2020-05-14T12:54:18Z"^^xsd:dateTime ,
"2020-09-23T14:30:03Z"^^xsd:dateTime ,
"2020-11-20T17:47:34Z"^^xsd:dateTime ,
"2020-09-02T08:41:41Z"^^xsd:dateTime ,
"2020-08-20T02:56:08Z"^^xsd:dateTime ,
"2020-08-03T15:27:45Z"^^xsd:dateTime ,
"2020-08-30T03:58:50Z"^^xsd:dateTime ,
"2020-08-22T23:04:02Z"^^xsd:dateTime ,
"2020-08-21T20:47:23Z"^^xsd:dateTime ,
"2020-09-18T13:35:27Z"^^xsd:dateTime ,
"2020-08-24T08:39:27Z"^^xsd:dateTime ,
"2020-08-23T17:31:27Z"^^xsd:dateTime ,
"2021-03-30T10:49:26Z"^^xsd:dateTime ,
"2020-11-17T19:59:03Z"^^xsd:dateTime ,
"2020-08-11T18:39:06Z"^^xsd:dateTime ,
"2020-08-23T17:40:21Z"^^xsd:dateTime ,
"2020-12-10T16:03:36Z"^^xsd:dateTime ,
"2020-08-11T13:04:12Z"^^xsd:dateTime ,
"2020-07-19T20:59:09Z"^^xsd:dateTime ,
"2020-12-27T14:40:50Z"^^xsd:dateTime ,
"2020-11-20T17:50:11Z"^^xsd:dateTime ,
"2020-08-26T20:03:41Z"^^xsd:dateTime ,
"2020-08-19T14:17:39Z"^^xsd:dateTime ,
"2020-12-03T21:45:22Z"^^xsd:dateTime ,
"2020-08-23T17:48:25Z"^^xsd:dateTime ,
"2020-08-23T18:14:10Z"^^xsd:dateTime ,
"2020-08-23T17:30:41Z"^^xsd:dateTime ,
"2020-07-04T17:53:53Z"^^xsd:dateTime ,
"2020-08-19T16:16:13Z"^^xsd:dateTime ,
"2020-05-23T18:12:52Z"^^xsd:dateTime ,
"2020-06-01T19:02:47Z"^^xsd:dateTime ,
"2021-02-13T06:43:09Z"^^xsd:dateTime ,
"2020-08-22T18:18:06Z"^^xsd:dateTime ,
"2020-12-01T00:11:42Z"^^xsd:dateTime ,
"2020-12-29T01:04:20Z"^^xsd:dateTime ,
"2020-08-19T04:08:20Z"^^xsd:dateTime ,
"2020-11-20T17:32:26Z"^^xsd:dateTime ,
"2020-08-22T13:32:32Z"^^xsd:dateTime ,
"2020-08-21T22:47:29Z"^^xsd:dateTime ,
"2020-12-10T18:43:48Z"^^xsd:dateTime ;
dbo:wikiPageHistoryLink ;
dbo:wikiPageID 76340 ;
dbo:wikiPageLength "92183"^^xsd:nonNegativeInteger ,
"92185"^^xsd:nonNegativeInteger ,
"92188"^^xsd:nonNegativeInteger ,
"92189"^^xsd:nonNegativeInteger ,
"92176"^^xsd:nonNegativeInteger ,
"92177"^^xsd:nonNegativeInteger ,
"92181"^^xsd:nonNegativeInteger ,
"92203"^^xsd:nonNegativeInteger ,
"92190"^^xsd:nonNegativeInteger ,
"92191"^^xsd:nonNegativeInteger ,
"92215"^^xsd:nonNegativeInteger ,
"92217"^^xsd:nonNegativeInteger ,
"92219"^^xsd:nonNegativeInteger ,
"92210"^^xsd:nonNegativeInteger ,
"92211"^^xsd:nonNegativeInteger ,
"91321"^^xsd:nonNegativeInteger ,
"93413"^^xsd:nonNegativeInteger ,
"93436"^^xsd:nonNegativeInteger ,
"91335"^^xsd:nonNegativeInteger ,
"93425"^^xsd:nonNegativeInteger ,
"92523"^^xsd:nonNegativeInteger ,
"91330"^^xsd:nonNegativeInteger ,
"91355"^^xsd:nonNegativeInteger ,
"93465"^^xsd:nonNegativeInteger ,
"93457"^^xsd:nonNegativeInteger ,
"93461"^^xsd:nonNegativeInteger ,
"93470"^^xsd:nonNegativeInteger ,
"91121"^^xsd:nonNegativeInteger ,
"93487"^^xsd:nonNegativeInteger ,
"92583"^^xsd:nonNegativeInteger ,
"92579"^^xsd:nonNegativeInteger ,
"92580"^^xsd:nonNegativeInteger ,
"92394"^^xsd:nonNegativeInteger ,
"92383"^^xsd:nonNegativeInteger ,
"92408"^^xsd:nonNegativeInteger ,
"92404"^^xsd:nonNegativeInteger ,
"92423"^^xsd:nonNegativeInteger ,
"92424"^^xsd:nonNegativeInteger ,
"93859"^^xsd:nonNegativeInteger ,
"93860"^^xsd:nonNegativeInteger ,
"91527"^^xsd:nonNegativeInteger ,
"92477"^^xsd:nonNegativeInteger ,
"91538"^^xsd:nonNegativeInteger ,
"92491"^^xsd:nonNegativeInteger ,
"92478"^^xsd:nonNegativeInteger ,
"91554"^^xsd:nonNegativeInteger ,
"92483"^^xsd:nonNegativeInteger ,
"91580"^^xsd:nonNegativeInteger ,
"92506"^^xsd:nonNegativeInteger ,
"91572"^^xsd:nonNegativeInteger ,
"91590"^^xsd:nonNegativeInteger ,
"91591"^^xsd:nonNegativeInteger ,
"91595"^^xsd:nonNegativeInteger ,
"91596"^^xsd:nonNegativeInteger ,
"95148"^^xsd:nonNegativeInteger ,
"91612"^^xsd:nonNegativeInteger ,
"95139"^^xsd:nonNegativeInteger ,
"91359"^^xsd:nonNegativeInteger ,
"98728"^^xsd:nonNegativeInteger ,
"91380"^^xsd:nonNegativeInteger ,
"91401"^^xsd:nonNegativeInteger ,
"91441"^^xsd:nonNegativeInteger ,
"92687"^^xsd:nonNegativeInteger ,
"91818"^^xsd:nonNegativeInteger ,
"91848"^^xsd:nonNegativeInteger ,
"91852"^^xsd:nonNegativeInteger ,
"91634"^^xsd:nonNegativeInteger ,
"91713"^^xsd:nonNegativeInteger ,
"91716"^^xsd:nonNegativeInteger ,
"91729"^^xsd:nonNegativeInteger ,
"91730"^^xsd:nonNegativeInteger ,
"92022"^^xsd:nonNegativeInteger ,
"95594"^^xsd:nonNegativeInteger ,
"92046"^^xsd:nonNegativeInteger ,
"92071"^^xsd:nonNegativeInteger ,
"92072"^^xsd:nonNegativeInteger ,
"92073"^^xsd:nonNegativeInteger ,
"92065"^^xsd:nonNegativeInteger ,
"92102"^^xsd:nonNegativeInteger ,
"92107"^^xsd:nonNegativeInteger ,
"92112"^^xsd:nonNegativeInteger ,
"89773"^^xsd:nonNegativeInteger ,
"93403"^^xsd:nonNegativeInteger ,
"91976"^^xsd:nonNegativeInteger ,
"91978"^^xsd:nonNegativeInteger ,
"91057"^^xsd:nonNegativeInteger ,
"91987"^^xsd:nonNegativeInteger ,
"91988"^^xsd:nonNegativeInteger ,
"92279"^^xsd:nonNegativeInteger ,
"92317"^^xsd:nonNegativeInteger ,
"92306"^^xsd:nonNegativeInteger ,
"92349"^^xsd:nonNegativeInteger ,
"92362"^^xsd:nonNegativeInteger ,
"92355"^^xsd:nonNegativeInteger ,
"92356"^^xsd:nonNegativeInteger ,
"92376"^^xsd:nonNegativeInteger ,
"92366"^^xsd:nonNegativeInteger ,
"92154"^^xsd:nonNegativeInteger ,
"92155"^^xsd:nonNegativeInteger ;
dbo:wikiPageModified "2020-08-11T18:39:04Z"^^xsd:dateTime ,
"2020-05-15T22:30:53Z"^^xsd:dateTime ,
"2020-07-23T21:07:18Z"^^xsd:dateTime ,
"2020-08-19T07:22:23Z"^^xsd:dateTime ,
"2020-08-21T20:38:21Z"^^xsd:dateTime ,
"2020-10-19T23:27:22Z"^^xsd:dateTime ,
"2020-08-20T16:09:32Z"^^xsd:dateTime ,
"2021-01-12T14:59:13Z"^^xsd:dateTime ,
"2020-12-29T01:00:36Z"^^xsd:dateTime ,
"2020-09-19T14:37:40Z"^^xsd:dateTime ,
"2020-08-22T17:43:34Z"^^xsd:dateTime ,
"2020-08-19T23:14:30Z"^^xsd:dateTime ,
"2020-08-11T18:35:49Z"^^xsd:dateTime ,
"2020-11-29T03:51:23Z"^^xsd:dateTime ,
"2020-06-09T03:54:16Z"^^xsd:dateTime ,
"2020-09-02T08:41:33Z"^^xsd:dateTime ,
"2020-08-22T23:02:19Z"^^xsd:dateTime ,
"2020-08-19T16:16:10Z"^^xsd:dateTime ,
"2020-08-23T17:48:20Z"^^xsd:dateTime ,
"2021-04-15T17:39:42Z"^^xsd:dateTime ,
"2020-11-20T16:46:30Z"^^xsd:dateTime ,
"2020-07-02T23:30:32Z"^^xsd:dateTime ,
"2020-08-19T22:24:40Z"^^xsd:dateTime ,
"2020-05-23T18:12:49Z"^^xsd:dateTime ,
"2020-08-19T04:08:15Z"^^xsd:dateTime ,
"2020-08-19T22:49:25Z"^^xsd:dateTime ,
"2020-05-14T12:52:13Z"^^xsd:dateTime ,
"2020-07-04T17:53:47Z"^^xsd:dateTime ,
"2020-12-30T23:43:04Z"^^xsd:dateTime ,
"2020-12-10T16:10:26Z"^^xsd:dateTime ,
"2020-07-19T21:00:02Z"^^xsd:dateTime ,
"2020-08-26T03:47:33Z"^^xsd:dateTime ,
"2020-09-19T14:43:49Z"^^xsd:dateTime ,
"2020-12-29T01:04:15Z"^^xsd:dateTime ,
"2020-05-04T13:42:59Z"^^xsd:dateTime ,
"2021-03-08T16:34:19Z"^^xsd:dateTime ,
"2020-08-22T13:32:27Z"^^xsd:dateTime ,
"2020-08-21T22:46:49Z"^^xsd:dateTime ,
"2020-06-09T19:48:39Z"^^xsd:dateTime ,
"2020-08-07T21:44:08Z"^^xsd:dateTime ,
"2020-11-05T00:35:41Z"^^xsd:dateTime ,
"2020-08-23T17:36:34Z"^^xsd:dateTime ,
"2020-08-22T16:23:50Z"^^xsd:dateTime ,
"2020-08-26T01:31:49Z"^^xsd:dateTime ,
"2020-12-27T14:40:44Z"^^xsd:dateTime ,
"2020-08-20T14:39:55Z"^^xsd:dateTime ,
"2020-08-22T16:23:33Z"^^xsd:dateTime ,
"2020-09-03T16:30:21Z"^^xsd:dateTime ,
"2020-08-21T15:55:47Z"^^xsd:dateTime ,
"2020-08-22T20:10:17Z"^^xsd:dateTime ,
"2020-12-01T16:31:11Z"^^xsd:dateTime ,
"2020-08-03T20:45:36Z"^^xsd:dateTime ,
"2020-06-01T19:02:42Z"^^xsd:dateTime ,
"2020-08-20T02:56:04Z"^^xsd:dateTime ,
"2020-11-20T17:32:17Z"^^xsd:dateTime ,
"2020-08-19T15:39:03Z"^^xsd:dateTime ,
"2020-08-23T17:40:17Z"^^xsd:dateTime ,
"2020-08-20T02:35:53Z"^^xsd:dateTime ,
"2020-12-10T16:09:15Z"^^xsd:dateTime ,
"2020-08-22T23:01:21Z"^^xsd:dateTime ,
"2021-03-30T10:49:21Z"^^xsd:dateTime ,
"2020-08-23T17:31:20Z"^^xsd:dateTime ,
"2020-09-02T08:19:37Z"^^xsd:dateTime ,
"2020-12-10T16:03:29Z"^^xsd:dateTime ,
"2020-08-20T14:44:14Z"^^xsd:dateTime ,
"2020-09-19T14:39:25Z"^^xsd:dateTime ,
"2020-09-18T13:35:19Z"^^xsd:dateTime ,
"2020-08-26T01:50:03Z"^^xsd:dateTime ,
"2020-08-03T15:26:53Z"^^xsd:dateTime ,
"2020-08-22T18:17:58Z"^^xsd:dateTime ,
"2021-01-12T21:28:05Z"^^xsd:dateTime ,
"2020-08-30T03:58:46Z"^^xsd:dateTime ,
"2020-08-26T00:31:08Z"^^xsd:dateTime ,
"2020-08-24T15:35:38Z"^^xsd:dateTime ,
"2020-12-11T02:24:08Z"^^xsd:dateTime ,
"2020-11-17T19:58:59Z"^^xsd:dateTime ,
"2020-09-19T14:46:15Z"^^xsd:dateTime ,
"2020-07-24T08:29:27Z"^^xsd:dateTime ,
"2020-08-24T16:59:31Z"^^xsd:dateTime ,
"2020-08-26T00:22:24Z"^^xsd:dateTime ,
"2020-08-03T15:27:41Z"^^xsd:dateTime ,
"2020-12-10T18:43:45Z"^^xsd:dateTime ,
"2020-08-07T20:13:14Z"^^xsd:dateTime ,
"2020-08-20T02:31:04Z"^^xsd:dateTime ,
"2021-01-17T21:18:45Z"^^xsd:dateTime ,
"2020-11-20T17:48:50Z"^^xsd:dateTime ,
"2020-08-23T18:14:06Z"^^xsd:dateTime ,
"2020-05-06T21:36:12Z"^^xsd:dateTime ,
"2020-08-22T23:03:58Z"^^xsd:dateTime ,
"2020-08-23T17:30:37Z"^^xsd:dateTime ,
"2021-02-22T08:42:33Z"^^xsd:dateTime ,
"2020-12-03T21:45:17Z"^^xsd:dateTime ,
"2020-12-27T14:47:21Z"^^xsd:dateTime ,
"2020-08-26T00:35:04Z"^^xsd:dateTime ,
"2020-08-18T17:05:16Z"^^xsd:dateTime ,
"2020-12-16T17:20:01Z"^^xsd:dateTime ,
"2021-02-21T15:06:38Z"^^xsd:dateTime ,
"2020-07-19T20:59:01Z"^^xsd:dateTime ,
"2021-02-13T06:43:04Z"^^xsd:dateTime ,
"2020-11-20T17:47:29Z"^^xsd:dateTime ,
"2020-11-17T20:19:04Z"^^xsd:dateTime ,
"2020-05-22T11:51:48Z"^^xsd:dateTime ,
"2020-08-15T23:06:00Z"^^xsd:dateTime ,
"2020-08-18T16:57:29Z"^^xsd:dateTime ,
"2020-11-20T17:50:05Z"^^xsd:dateTime ,
"2020-09-23T14:29:57Z"^^xsd:dateTime ,
"2020-08-11T13:04:05Z"^^xsd:dateTime ,
"2020-08-21T20:47:19Z"^^xsd:dateTime ,
"2020-08-26T00:21:23Z"^^xsd:dateTime ,
"2020-11-20T17:49:26Z"^^xsd:dateTime ,
"2020-04-26T07:37:49Z"^^xsd:dateTime ,
"2020-09-18T05:31:05Z"^^xsd:dateTime ,
"2020-11-30T14:42:46Z"^^xsd:dateTime ,
"2020-12-27T20:56:50Z"^^xsd:dateTime ,
"2020-12-28T12:59:03Z"^^xsd:dateTime ,
"2020-12-01T16:30:22Z"^^xsd:dateTime ,
"2021-02-22T08:43:05Z"^^xsd:dateTime ,
"2020-08-26T20:03:39Z"^^xsd:dateTime ,
"2020-08-22T23:55:43Z"^^xsd:dateTime ,
"2020-08-22T13:16:55Z"^^xsd:dateTime ,
"2020-08-25T22:16:18Z"^^xsd:dateTime ,
"2020-08-26T01:11:16Z"^^xsd:dateTime ,
"2020-08-24T08:39:21Z"^^xsd:dateTime ,
"2020-12-10T15:59:06Z"^^xsd:dateTime ,
"2020-08-26T01:29:01Z"^^xsd:dateTime ,
"2020-08-21T22:47:24Z"^^xsd:dateTime ,
"2020-06-27T23:24:34Z"^^xsd:dateTime ,
"2020-12-10T15:55:36Z"^^xsd:dateTime ,
"2021-04-15T17:39:48Z"^^xsd:dateTime ,
"2020-08-19T14:17:33Z"^^xsd:dateTime ,
"2020-08-25T23:58:41Z"^^xsd:dateTime ,
"2020-08-24T07:59:24Z"^^xsd:dateTime ,
"2020-12-10T16:00:45Z"^^xsd:dateTime ,
"2020-07-04T17:53:13Z"^^xsd:dateTime ,
"2020-11-17T19:55:49Z"^^xsd:dateTime ,
"2020-05-14T12:54:12Z"^^xsd:dateTime ;
dbo:wikiPageOutDegree "242"^^xsd:nonNegativeInteger ,
"250"^^xsd:nonNegativeInteger ,
"251"^^xsd:nonNegativeInteger ,
"253"^^xsd:nonNegativeInteger ,
"246"^^xsd:nonNegativeInteger ,
"247"^^xsd:nonNegativeInteger ,
"248"^^xsd:nonNegativeInteger ,
"249"^^xsd:nonNegativeInteger ,
"258"^^xsd:nonNegativeInteger ,
"254"^^xsd:nonNegativeInteger ,
"255"^^xsd:nonNegativeInteger ,
"256"^^xsd:nonNegativeInteger ,
"257"^^xsd:nonNegativeInteger ,
"270"^^xsd:nonNegativeInteger ;
dbo:wikiPageRevisionID 954821721 ,
973847964 ,
961671758 ,
989229112 ,
960216601 ,
979222698 ,
974231984 ,
996642327 ,
974559132 ,
991744951 ,
993432718 ,
956636320 ,
974248585 ,
974420786 ,
973795863 ,
974553294 ,
974551866 ,
993431328 ,
991259503 ,
994616921 ,
974967736 ,
971732598 ,
997318289 ,
969180922 ,
956898470 ,
991744830 ,
974665461 ,
968511171 ,
999968044 ,
975112775 ,
973852908 ,
974233129 ,
974554612 ,
961551822 ,
989726692 ,
993458909 ,
973929524 ,
974248520 ,
974004149 ,
966000591 ,
993430328 ,
989718464 ,
974345608 ,
973837389 ,
976310597 ,
974979719 ,
996589744 ,
974959409 ,
974015766 ,
989726501 ,
974939608 ,
993430664 ,
999903274 ,
974421083 ,
1011022836 ,
953212864 ,
989726860 ,
974401069 ,
974661004 ,
971718714 ,
974724767 ,
996875275 ,
965709833 ,
974370350 ,
989229612 ,
973908503 ,
1006506313 ,
974370378 ,
979223220 ,
970990574 ,
974193265 ,
972377968 ,
993429660 ,
974973808 ,
958420147 ,
966000674 ,
974381064 ,
976313490 ,
968511315 ,
974976477 ,
973200138 ,
989726781 ,
993527388 ,
1017983713 ,
989724469 ,
974426329 ,
1017983738 ,
955270164 ,
973906300 ,
1015045599 ,
987114619 ,
971040099 ,
973903989 ,
958193193 ,
973686978 ,
974385997 ,
974004662 ,
1008241666 ,
974964895 ,
989232590 ,
979223547 ,
1008241710 ,
975737308 ,
974967046 ,
979222430 ,
974551740 ,
974552669 ,
970990707 ,
973928998 ,
972378374 ,
991524702 ,
973685956 ,
979002761 ,
964850741 ,
1008095491 ,
973775747 ,
974347480 ,
993432491 ,
974713533 ,
992171172 ,
996755668 ,
956636114 ,
1001016426 ,
974995555 ,
974976959 ,
973931764 ,
974965152 ,
996590533 ,
976555851 ,
984408123 ,
974420900 ,
969250096 ,
979050176 ,
972326901 ,
996876222 ,
979916858 ;
dbo:wikiPageRevisionLink .
@prefix dbp: .
@prefix dbt: .
dbr:Principal_component_analysis dbp:wikiPageUsesTemplate dbt:Page_needed ,
dbt:Isbn ,
dbt:Machine_learning_bar ,
dbt:Reflist ,
dbt:Authority_control ,
dbt:Short_description ,
dbt:Citation_needed ,
dbt:Statistics ,
dbt:Clarify ,
dbt:See_also ,
dbt:Cite_book ,
dbt:Cite_journal ,
dbt:Div_col ,
dbt:Div_col_end ,
dbt:Mvar ,
dbt:Commons_category ,
dbt:YouTube ,
dbt:Cn ,
dbt:Main ,
dbt:Math .
@prefix dct: .
@prefix dbc: .
dbr:Principal_component_analysis dct:subject dbc:Dimension_reduction ,
dbc:Matrix_decompositions ;
foaf:depiction ,
.