Oracle® Data Mining Concepts
11g Release 2 (11.2)
Part Number E12216-02
Index
A
accuracy, 3.5.2.1, 5.4.1.1, 5.4.2
active learning, 18.1.4
algorithms
    Apriori, 2.3.2, 8.3, 10
    Decision Tree, 2.3.1, 5.5, 11
    Generalized Linear Model, 2.3.1, 4.4, 5.5, 12
    k-Means, 2.3.2, 7.3, 13
    Minimum Description Length, 2.3.1, 9.4, 14
    Naive Bayes, 2.3.1, 5.5, 15
    Non-Negative Matrix Factorization, 2.3.2, 9.4, 16
    O-Cluster, 2.3.2, 7.3, 17, 17.1
    One-Class Support Vector Machine, 2.3.2, 6.3, 18.5
    supervised, 2.3.1
    Support Vector Machine, 2.3.1, 4.4, 5.5, 18
    unsupervised, 2.3.2
ALTER_REVERSE_EXPRESSION, 19.4.2.1
anomaly detection, 2.2.2.1, 2.2.3, 2.3.2, 5.4.2, 6, 7.1
    sample data, 6.2
    sample problem, 6.2
    scoring, 2.2.2.1, 6.2.1, 6.2.2
antecedent, 10.1.1.1
API, See application programming interface
application programming interface, 2.1
apply, See scoring
Apriori, 2.3.2, 20.2.3
area under the curve, 5.3.4.2
artificial intelligence, 2.2
ASSO_MAX_RULE_LENGTH, 8.1.2
ASSO_MIN_CONFIDENCE, 10.1.1.2
ASSO_MIN_SUPPORT, 8.1.3
association rules, 2.2.2.1, 2.2.3, 2.3.2, 8
    antecedent and consequent, 10.1.1.1
    confidence, 10.1.1.2, 10.2.2
    data, 8.1.1
    lift, 10.2.3
    sample problem, 8.2
    scoring not supported for, 2.2.2.1, 8.1
    support, 8.1.3, 10.2.1
    text mining, 20.2.7
attribute histogram, 7.1.1, 7.1.4
attribute importance, 2.2.3, 2.3.1, 3.5.1, 9.2, 14.1
    model details, 3.5.1
    sample data, 9.2.1
    sample problem, 9.2.2
    scoring not supported for, 2.2.1.2, 9.2
attributes, 1.3.1, 2.2.3
    finding the best, 9.1
Automatic Data Preparation, 1.2.2, 1.3.3, 2.4.1, 19.1
B
Bayes' Theorem, 5.5, 15.1
benefits, 5.4.1.3
binary target, 5.1
binning, Preface
BLAS library, 2.7
blog, 2.6.1
C
case table, 1.1.7, 1.3.2, 19.1.1
categorical, 5.1
centroid, 7.1.1, 7.1.5
CLAS_COST_TABLE_NAME, 5.4.1.3
CLAS_PRIORS_TABLE_NAME, 5.4.2
CLAS_WEIGHTS_TABLE_NAME, 5.4.2
classification, 2.2.3, 2.3.1, 5
    biasing, 5.4
    binary, 5.1
    multiclass, 5.1
    sample data, 5.2
    sample problem, 5.2
    scoring, 2.2.1.2, 5.1, 5.2
    testing, 5.1, 5.3
    text mining, 20.2.4
    training, 5.1
CLUS_NUM_CLUSTERS, 7.1.3.2
cluster details, 7.2.1
clustering, 2.2.2.1, 2.2.3, 2.3.2, 7, 13.1
    hierarchical, 7.1.2
    sample data, 7.2
    sample problem, 7.2
    scoring, 2.2.2.1, 7.1.6, 7.2.2
    text mining, 20.2.5
clusters
    rules, 7.1.3
computational learning, 1.1.6
confidence, 7.1.3.1, 10.1.1.2, 10.2.2
    association rules, 8.1
    defined, 1.1.2
confidence bounds, 12.1.1.3
confusion matrix, 1.3.3, 5.4.1.1
consequent, 10.1.1.1
cost matrix, 5.4.1, 5.4.1.3
cost/benefit matrix, 11.1.3.2
costs, 1.3.3, 5.4.1, 5.4.1.3
    Decision Tree, 5.4.1.3
CREATE_MODEL, 2.5.2
cube, 1.1.6
cumulative gain, 5.3.3.1
cumulative lift, 5.3.3.1
cumulative number of nontargets, 5.3.3.1
cumulative number of targets, 5.3.3.1
cumulative percentage of records, 5.3.3.1
cumulative target density, 5.3.3.1
D
data
    automatic preparation, 1.2.2, 1.3.3, 19
    cleansing, 1.3.2
    collection and exploration, 1.3.2
    dimensioned, 1.1.6, 2.4
    embedded preparation, 19
    highly dimensioned, 9.1
    market basket, Preface, 1.3.3
    multi-record case, 8.1.1
    single-record case, 1.1.7
    sparse, 10.3
    transactional, 8.1.1, 10.3
    unstructured, 20.1
    wide, 9.1
data mining
    automated, 3
    defined, 1.1
    Oracle, 2
    process, 1.3
data preparation, 1.2.2, 1.3.2, 2.4, 19
    automatic, 19
    clustering, 7.1
    embedded, 1.2.2, 19
    missing values, 19.1
data types, 19.1.2
data warehouse, 1.1.7
date data, 19.1.2.1
DBMS_DATA_MINING, 2.5.2
DBMS_DATA_MINING_TRANSFORM, 2.5.2, 2.7
DBMS_FREQUENT_ITEMSET, 2.7
DBMS_PREDICTIVE_ANALYTICS, 2.5.2, 3.3.1
DBMS_STAT_FUNCS, 2.7
Decision Tree, 2.3.1, 3.5.3, 5.5
    model details, 3.5.3
    rules, 3.5.3, 5.2
demo programs, 2.6.1
deployment, 1.3.4
deprecated features, Preface
descriptive models, 2.2.2
directed learning, 2.2.1
distance-based clustering models, 13.1
DMSYS schema, See desupported features
documentation, 2.6
E
embedded data preparation, 2.4.1, 19.1
entropy, 11.1.3.1
equi-width binning, 10.3
Excel, 3.2
EXPLAIN, 2.5.2, 3.1.3, 9.2.3
    attribute importance, 9.2.3
F
false negatives, 1.3.3, 5.3.4.4
false positive, 5.3.4.1, 5.4.1.2
false positive fraction, 5.3.4.4
false positives, 1.3.3, 5.3.4.4
FEAT_NUM_FEATURES, 9.3
feature extraction, 2.2.3, 2.3.2, 9.3, 20.3
    coefficients, 9.3.2
    model details, 9.3.2
    sample data, 9.3.1
    sample problem, 9.3.2
    scoring, 2.2.2.1, 9.3.3
    text mining, 20.2.6
features, 2.2.3, 9.3, 20.3
frequent itemsets, 8.1.3
G
Generalized Linear Models, 2.3.1, 4.4, 5.5, 20.2.3
GET_MODEL_DETAILS, 19.4
GET_MODEL_TRANSFORMATIONS, 19.4.1
gini, 11.1.3.1
grid-based clustering models, 17.1
H
hierarchical clustering, 7.1.2
hierarchies, 1.1.6
    clusters, 7.1.1
histogram, 7.1.1, 7.1.4
I
inductive inference, 1.1.6
itemsets, 8.1.2
J
Java API, 2.5.4, 3.3.2
K
KDD, 1.1
kernel, 2.1
k-Means, 2.3.2, 7.3, 13.1, 20.2.3
L
LAPACK library, 2.7
lift, 1.3.3, 5.3.3
    association rules, 10.2.3
    sample chart, 5.3.3
    statistics, 5.3.3.1
linear regression, 2.3.1, 4.1.1.1, 4.4, 5.5
logistic regression, 2.3.1, 20.2.4.3
    class weights, 5.4.2
M
machine learning, 2.2
market-basket
    sample problem, 8.2
market-basket analysis, 8.1
market-basket data, 1.3.3
MDL, See Minimum Description Length
Mean Absolute Error, 4.3.2.2
Minimum Description Length, 2.3.1, 3.5.1, 9.4, 14.1, 20.2.3
mining functions, 2.2, 2.2.3
    anomaly detection, 2.2.3, 2.3.2, 6
    association rules, 2.2.3, 2.3.2, 8
    attribute importance, 2.2.3, 2.3.1, 9, 9.2, 14.1
    classification, 2.2.3, 2.3.1, 5
    clustering, 2.2.3, 2.3.2, 7
    feature extraction, 2.2.3, 2.3.2, 9, 9.3
    regression, 2.2.3, 2.3.1, 4
missing value treatment, Preface, 19.1
model details, 1.3.4, 3.5.1, 3.5.3, 11.1.1
models
    anomaly detection, 2.2.3
    association rules, 2.2.3
    attribute importance, 2.2.3
    classification, 5.1
    clustering, 2.2.3, 7.1
    deploying, 1.3.4
    feature extraction, 2.2.3
    overfitting, 11.1.3.3
    regression, 2.2.3
    supervised, 2.2.1, 2.3.1, 4, 5
    unsupervised, 2.2.2
multiclass target, 5.1
multicollinearity, 12.1.2
multidimensional analysis, 2.7
multidimensional data, 1.1.6, 2.7
multiple regression, 4.1.1.2
multivariate linear regression, 4.1.1.2
N
Naive Bayes, 2.3.1, 5.4.2, 5.5, 20.2.3
    prior probabilities, 5.4.2
negative class, 5.4.1.2
nested data, 2.4
neural networks, 18.1.1
NMF, See Non-Negative Matrix Factorization
nonlinear regression, 4.1.1.4
Non-Negative Matrix Factorization, 2.3.2, 9.4, 20.2.3
    data preparation, 16.2
normalization, Preface
numerical, 5.1
O
O-Cluster, 2.3.2, 7.3, 17.1
OLAP, 1.1.6, 2.7
One-Class SVM, 2.3.2, 6.3, 18.5
Oracle Data Miner, 2.5.1, 4.2, 4.3.2.3, 5.2, 5.4.2, 7.1.4, 9.2.2, 20.3
Oracle Data Mining discussion forum, 2.6.1
Oracle Database analytics, 2.7
Oracle Database kernel, 2.1
Oracle Database statistical functions, 2.7
Oracle Discoverer, 2.7
Oracle OLAP, 2.7
Oracle Portal, 2.7
Oracle Spatial, 2.7
Oracle Spreadsheet Add-In for Predictive Analytics, 2.6.1, 3.2, 9.2.3
Oracle Text, 2.7, 20.5
Orthogonal Partitioning Clustering, See O-Cluster
outliers, 1.2.2
overfitting, 2.2.1.1
P
PL/SQL API, 2.5.2
positive class, 5.4.1.2
PREDICT, 2.5.2, 3.1.3
PREDICTION_PROBABILITY, 2.5.3, 6.2.2
predictive analytics, 2.5.5, 3, 9.2.3
    accuracy, 3.4
    Java API, 3.3.2
    PL/SQL API, 3.3.1
    See also EXPLAIN
    See also PREDICT
    See also PROFILE
    Spreadsheet Add-In, 2.5.5, 3.2
predictive confidence, 3.4, 3.5.2.1, 4.3.2.3
predictive models, 2.2.1
prior probabilities, 5.4.2
probability threshold, 5.3.3.1, 5.3.4, 5.3.4.4
PROFILE, 2.5.2, 3.1.3
pruning, 11.1.3.3
Q
quantile lift, 5.3.3.1
R
radial basis functions, 18.1.1
Receiver Operating Characteristic, 3.5.2, 5.3.4
    statistics, 5.3.4.2
regression, 2.2.3, 2.3.1, 4
    defined, 4.1.1
    sample build data, 4.2
    sample problem, 4.2
    scoring, 2.2.1.1
    testing, 4.1, 4.3
    training, 4.1
regression coefficients, 4.1.1, 4.1.1.3
regression parameters, 4.1.1
regularization, 18.1
residual, 4.1.1
residual plot, 4.3.1
reverse transformations, 19.4.2
ridge regression, 12.1.2
ROC, See Receiver Operating Characteristic
Root Mean Squared Error, 4.3.2
rules, 1.3.4
    clusters, 7.1.1, 7.1.3
    confidence, 1.1.2
    Decision Tree, 3.5.3, 11.1.1
    PROFILE, 3.5.3
    support, 1.1.2
S
score, See scoring
scoring, 1.3.4, 2.5.3
    anomaly detection, 2.2.2.1, 6.2.1, 6.2.2
    classification, 2.2.1.2, 5.1, 5.2
    clustering, 2.2.2.1, 7.1.6
    defined, 1.1.1
    feature extraction, 2.2.2.1
    not supported for association rules, 2.2.2.1, 8.1
    not supported for attribute importance, 2.2.1.2, 9.2
    real time, 1.3.4
    regression, 2.2.1.1, 4.2
    supervised models, 2.2.1.2
    unsupervised models, 2.2.2.1
segment, 7.1
single-record case, 1.1.7
singularity, 12.1.2
slope, 4.1.1.1
sparse data, 10.3, 19.1
Spreadsheet Add-In, See Oracle Spreadsheet Add-In for Predictive Analytics
SQL data mining functions, 2.5.3
star schema, 2.4
statistical functions, 2.7
statistics, 1.1.5
stratified sampling, 5.4.2
supermodel, 2.4.1
supervised algorithms, 2.3.1
supervised learning, 2.2.1
support, 7.1.3.1, 8.1.3, 10.2.1
    association rules, 8.1
    defined, 1.1.2
Support Vector Machine, 2.3.1, 3.5.2, 5.5, 20.2.3
    active learning, 18.1.4
    classification, 2.3.1
    Gaussian kernel, 2.3.1
    linear kernel, 2.3.1
    one class, 2.3.2
    regression, 2.3.1, 4.4
    text mining, 20.2.4.3
SVM, See Support Vector Machine
T
target, 2.2.1, 2.2.2.1
target density, 5.3.3.1
term extraction, 20.3
terms, 20.3
testing
    classification model, 5.3
    classification models, 5.1
    regression model, 4.1, 4.3
    supervised models, 2.2.1.1
text
    pre-processing, 20.3
text features, 20.3
text mining
    algorithms, 20.2.3
    association rules, 20.2.7
    classification, 20.2.4
    clustering, 20.2.5
    data types, 20.2.2
    feature extraction, 20.2.6
    logistic regression, 20.2.4.3
    pre-processing, 20.2
    sample data, 20.4
    sample problem, 20.4
    Support Vector Machine, 20.2.4.3
text terms, 20.3
timestamp data, 19.1.2.1
training, 2.2.1
    anomaly detection, 6.1.1
    regression model, 4.1
transactional data, 1.3.3, 8.1.1, 10.3
transformations, 2.4.1, 2.5.2, 19.1
transparency, 11.1.1, 12.1.1.1, 19.1, 19.3.2, 19.4
true negatives, 5.3.4.4
true positive, 5.3.4.1, 5.4.1.2
true positive fraction, 5.3.4.4
true positives, 5.3.4.4
U
unstructured data, 2.4, 20.1
unsupervised algorithms, 2.3.2
unsupervised learning, 2.2.2
UTL_NLA, 2.7
V
Vapnik's theory, 4.4
W
white papers, 2.6.1
wide data, 9.1
X
XML
    Decision Tree, 3.5.3, 11.1.4
    PROFILE, 3.5.3
Y
y intercept, 4.1.1.1