Data Mining and Knowledge Discovery Nuggets 96:36, e-mailed 96-11-21

News:
* U. Fayyad, Data Mining & Knowledge Discovery journal: first issue
http://www.research.microsoft.com/research/datamine
* L. Winstone, Data Mining and Data Warehousing Survey,
http://db.cs.sfu.ca/questionnaire
* A. Van Epps, New Data Mining WebSite, http://www.rpi.edu/~vanepa2/
Publications:
* J. Friedman, Local Learning Based on Recursive Covering,
ftp://playfair.stanford.edu/pub/friedman/dart.ps.Z
and Another Approach To Polychotomous Classification,
ftp://stat.stanford.edu/pub/friedman/poly.ps.Z
Positions:
* GPS, Database Marketing Position at GTE Mobilnet, Atlanta
Meetings:
* S. Vinterbo, PKDD'97 -- Principles of Data Mining and
Knowledge Discovery, Norway, June 25-27, 1997
http://www.idt.ntnu.no/pkdd97
* M. Smyth, Hastie and Tibshirani, Regression and Classification ...
course, Cambridge, MA, Dec 9 - 10, 1996,
http://playfair.stanford.edu/~trevor/mrc.html
* M. Smyth, Hinton and Jordan, Learning Methods for Prediction ...,
course, LA, Dec 14-15, 1996
http://www.ai.mit.edu/projects/cbcl/web-pis/jordan/course/index.html
* KBCS, CFP: KBCS-96, Bombay, India, Dec 16-18,
http://konark.ncst.ernet.in/~kbcs/kbcs96.html
* D. Gordon, ICML-97 workshop proposals due Dec 4, 1996
http://cswww.vuse.vanderbilt.edu/~mlccolt/icml97/index.html

--
KDD Nuggets is a free electronic newsletter for the Data Mining and Knowledge
Discovery in Databases (KDD) community, focusing on the latest research and
applications.

Submissions are most welcome and should be emailed, with a DESCRIPTIVE
subject line (and a URL, when available), to kdd@gte.com.
To subscribe, email "subscribe kdd" to kdd-request@gte.com.

Nuggets frequency is approximately 3 times a month.
Back issues of Nuggets, a catalog of S*i*ftware (data mining tools),
and a wealth of other information on Data Mining and Knowledge Discovery
are available at the Knowledge Discovery Mine site http://info.gte.com/~kdd

-- Gregory Piatetsky-Shapiro (moderator)

********************* Official disclaimer ***********************************
* All opinions expressed herein are those of the writers (or the moderator) *
* and not necessarily of their respective employers (or GTE Laboratories) *
*****************************************************************************

~~~~~~~~~~~~ Quotable Quote ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nam et ipsa scientia potestas est. (Knowledge is Power)
--Francis Bacon

but compare it with

Real knowledge is to know the extent of one's ignorance. - Confucius

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: Usama Fayyad (fayyad@MICROSOFT.com)
Subject: Data Mining & Knowledge Discovery Journal: contents vol 1:1
Date: Fri, 15 Nov 1996 11:03:52 -0800

ANNOUNCEMENT and CALL FOR PAPERS

Below are the contents of the first issue of the new journal Data Mining
and Knowledge Discovery, published by Kluwer Academic Publishers.

The journal is accepting submissions of works from a wide variety of
fields that relate to data mining and knowledge discovery in databases
(KDD). We accept regular research contributions, survey articles,
application details papers, as well as short (2-page) application
summaries. The goal is for Data Mining and Knowledge Discovery to
become the premier forum for publishing high-quality original work
from the wide variety of fields on which KDD draws, including:
statistics, pattern recognition, database research and systems,
modelling uncertainty and decision making, neural networks, machine
learning, OLAP, data warehousing, high-performance and parallel
computing, and visualization.

The goal is to create a reference resource where researchers and
practitioners in the area can look up and communicate relevant work
from a wide variety of fields.

The journal's homepage provides a detailed call for papers, a description
of the journal and its scope, and a list of the Editorial Board.
Abstracts of the articles in the first issue and the editorial are also
on-line. The home page is maintained at:
http://www.research.microsoft.com/research/datamine

- If you are interested in submitting a paper, please visit the
homepage: http://www.research.microsoft.com/research/datamine
to look up instructions.

- If you would like a free sample issue sent to you, click on
the link in http://www.research.microsoft.com/research/datamine
and provide the address via the on-line form.

Usama Fayyad, co-Editor-in-Chief
Data Mining and Knowledge Discovery (datamine@microsoft.com)

=======================================================================

Data Mining and Knowledge Discovery
http://www.research.microsoft.com/research/datamine

CONTENTS OF: Volume 1, Issue 1
==============================
For more details, abstracts, and on-line version of Editorial, see
http://www.research.microsoft.com/research/datamine/vol1-1

===========Volume 1, Number 1, March 1997===========

EDITORIAL by Usama Fayyad

PAPERS
======

Statistical Themes and Lessons for Data Mining
Clark Glymour, David Madigan, Daryl Pregibon, Padhraic Smyth

Data Cube: A Relational Aggregation Operator Generalizing Group-By,
Cross-Tab, and Sub-Totals
Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart,
Murali Venkatrao, Frank Pellow (IBM Toronto), Hamid Pirahesh

On Bias, Variance, 0/1-Loss, and the Curse-of-Dimensionality
Jerome H. Friedman

Bayesian Networks for Data Mining
David Heckerman

BRIEF APPLICATIONS SUMMARIES:
============================

Advanced Scout: Data Mining and Knowledge Discovery in NBA data
Ed Colet, Inderpal Bhandari, Jennifer Parker, Zachary Pines, Rajiv
Pratap, Krishnakumar Ramanujam

------------------------------------------------------------------------
To get a free sample copy of the above issue, visit the web page at
http://www.research.microsoft.com/research/datamine
Those who do not have web access may send their address to Kluwer
by e-mail at: sdelman@wkap.com


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: Lara Winstone (winstone@cs.sfu.ca)
Subject: Data Mining and Data Warehousing Survey
Date: Thu, 14 Nov 1996 17:00:49 -0800 (PST)

Data Mining and Data Warehousing Product/System Survey

A questionnaire has been created by a group of researchers from the
Database Systems Research Laboratory at Simon Fraser University, Canada
http://db.cs.sfu.ca, who are currently studying data mining
and data warehousing technology.

We are attempting to contact researchers and developers who have made
contributions, in the form of commercial products and/or research
prototypes, within either of these areas.

The purpose of this questionnaire is to collect information regarding
existing data mining and data warehousing systems/prototypes in order
to create a comprehensive survey of the current state of the art. The
goal of this project is to publish the survey in some popular magazine
or academic journal in order to make the information therein widely
available.

An on-line version of the questionnaire can be found at:

*************************************
* http://db.cs.sfu.ca/questionnaire *
*************************************

If you or your organization has developed either a data mining or data
warehousing system, it would be greatly appreciated if you would take a
few minutes to complete this questionnaire or forward this message to
an appropriate person within your organization. By doing so, you will
help us collect valuable information regarding data mining and data
warehousing, and your system will also be given valuable exposure.

The survey, which will present a balanced view of the current products,
will be useful to both end-users and developers, and will be made publicly
available upon its completion. You will have the opportunity to access
both the survey and a concise web summary of the companies and research
labs we have canvassed via the internet in the next few months.

Thank you for your time and thoughtful consideration.

Sincerely,

Lara Winstone

N.B. Three options for completing this questionnaire:

1) WWW Version: Complete and submit the form found at the above URL.

2) Email Version: If you would prefer to complete a text version of
the questionnaire, please contact Lara Winstone via email at
winstone@cs.sfu.ca or download a copy from the above URL.

3) Mail/Fax Version: A downloadable postscript version can be found
at the above URL. This version can be printed, completed and then
mailed or faxed to:

Mailing Address: Ms. Lara Winstone
Database Systems Research Laboratory
School of Computing Science
Simon Fraser University
Burnaby, B.C.
Canada, V5A 1S6

Tel: (604)291-5938, (604)291-5371
Fax: (604)291-3045


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Wed, 13 Nov 1996 17:05:56 -0400
To: kdd@gte.com
From: vanepa@rpi.edu (Amy S. Van Epps)
Subject: New Data Mining WebSite

Announcing a new web page for Data Mining and Knowledge Discovery!

This page is being developed and maintained in conjunction with the
Rensselaer Polytechnic Institute course 'Knowledge Discovery and Data
Mining' (92.6962). It will be under construction for the duration of the
semester. This site will include the list of weekly readings, homework
assignments, upcoming conferences about data mining and knowledge
discovery, information about some of the software that is available and
datasets which are available for testing data mining techniques.

URL: http://www.rpi.edu/~vanepa2/

Please visit the site and send comments or questions to Amy Van Epps,
vanepa2@rpi.edu.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Amy S. Van Epps Phone: (518) 276-8314
Engineering Librarian FAX: (518) 276-8559
Folsom Library vanepa@rpi.edu
Rensselaer Polytechnic Institute
Troy, NY 12180-3590


>~~~Publications:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Thu, 14 Nov 1996 14:49:58 -0800
From: 'Jerome H. Friedman' (jhf@playfair.Stanford.EDU)
Subject: Technical Reports Available.

LOCAL LEARNING BASED ON
RECURSIVE COVERING

Jerome H. Friedman
Stanford University
(jhf@playfair.stanford.edu)

ABSTRACT


Local learning methods approximate a global relationship between an output
(response) variable and a set of input (predictor) variables by establishing
a set of 'local' regions that collectively cover the input space, and
modeling a different (usually simple) input-output relationship in each one.
Predictions are made by using the model associated with the particular
region in which the prediction point is most centered. Two widely applied
local learning procedures are K-nearest neighbor methods and decision tree
induction algorithms (CART, C4.5). The former induce a large number of
highly overlapping regions based only on the distribution of training input
values. By contrast, the latter (recursively) partition the input space into a
relatively small number of highly customized (disjoint) regions using the
training output values as well. Recursive covering unifies these two
approaches in an attempt to combine the strengths of both. A large number of
highly customized overlapping regions are produced based on both the
training input and output values. Moreover, the data structure representing
this cover permits rapid search for the prediction region given a set of
(future) input values.

Available by ftp from:
'ftp://playfair.stanford.edu/pub/friedman/dart.ps.Z'
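For readers unfamiliar with the first of the two procedures contrasted in
the abstract, a minimal K-nearest-neighbor predictor can be sketched as
follows. This is an illustrative Python sketch with made-up toy data; it is
not an implementation of the recursive covering method described above.

```python
# Minimal K-nearest-neighbor regression: predict at a point by fitting a
# simple local model (here, the mean of the K nearest training outputs).
# This illustrates the "many overlapping regions induced from training
# input values only" side of the contrast drawn in the abstract.

def knn_predict(x, train_x, train_y, k=3):
    """Average the outputs of the k training points nearest to x
    (squared Euclidean distance)."""
    dists = [(sum((a - b) ** 2 for a, b in zip(x, tx)), y)
             for tx, y in zip(train_x, train_y)]
    dists.sort(key=lambda d: d[0])
    neighbors = [y for _, y in dists[:k]]
    return sum(neighbors) / len(neighbors)

train_x = [(0.0,), (1.0,), (2.0,), (3.0,)]
train_y = [0.0, 1.0, 4.0, 9.0]
print(knn_predict((1.5,), train_x, train_y, k=2))  # mean of y at x=1 and x=2
```

Decision tree induction, by contrast, would carve this input range into a
few disjoint intervals chosen using the output values as well.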


ANOTHER APPROACH TO
POLYCHOTOMOUS CLASSIFICATION

Jerome H. Friedman
Stanford University
(jhf@stat.stanford.edu)

ABSTRACT

An alternative solution to the K-class (K > 2, polychotomous) classification
problem is proposed. It is a simple extension of K = 2 (dichotomous)
classification in that a separate two-class decision boundary is
independently constructed between every pair of the K classes. Each of these
boundaries is then used to assign an unknown observation to one of
its two respective classes. The individual class that receives the most
such assignments over these K(K-1)/2 decisions is taken as the predicted
class for the observation. Motivation for this approach is provided along
with discussion as to those situations where it might be expected to do
better than more traditional methods. Examples are presented illustrating
that substantial gains in accuracy can sometimes be achieved.

Available by ftp from:
'ftp://stat.stanford.edu/pub/friedman/poly.ps.Z'

Note: these postscript files do not view properly in some versions of
ghostview. They do seem to print properly on nearly all postscript printers.
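The pairwise voting scheme described in the second abstract can be sketched
as follows. This is an illustrative Python sketch: the hard-coded threshold
rules below stand in for the independently constructed two-class decision
boundaries, and the toy classes and thresholds are invented, not from the
paper.

```python
from itertools import combinations
from collections import Counter

def pairwise_classify(x, classifiers, classes):
    """Polychotomous classification by pairwise voting: each of the
    K(K-1)/2 two-class decision rules assigns x to one of its two
    classes; the class receiving the most assignments is predicted."""
    votes = Counter()
    for (ci, cj) in combinations(classes, 2):
        winner = classifiers[(ci, cj)](x)  # returns ci or cj
        votes[winner] += 1
    return votes.most_common(1)[0][0]

# Toy stand-in boundaries on a 1-D input (hypothetical rules, roughly:
# class 0 for small x, class 1 in the middle, class 2 for large x).
classifiers = {
    (0, 1): lambda x: 0 if x < 1.0 else 1,
    (0, 2): lambda x: 0 if x < 1.5 else 2,
    (1, 2): lambda x: 1 if x < 2.0 else 2,
}
print(pairwise_classify(0.5, classifiers, [0, 1, 2]))  # -> 0
```

With K = 3 classes there are 3 pairwise boundaries; each unknown point
receives 3 votes, and the majority class wins.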


>~~~Positions:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Thu, 21 Nov 1996 16:55:46 -0500
From: gps@gte.com (Gregory Piatetsky-Shapiro)

SUMMARY: a person with several years of both business and statistical
experience is needed by GTE Mobilnet Database Marketing Group, located in
Atlanta, GA.

Position Title: Administrator, Database Modeling and Analysis
Location: 245 Perimeter Circle Pkwy, Atlanta, GA
Salary range: $60-70,000 depending on experience, plus relocation expenses.
Reports to: Manager, Database Marketing

1. Basic Purpose:
Responsible for conducting analysis of GTE Mobilnet's database
marketing program. This will be done by building statistical models
which identify characteristics of those who respond and those who do
not respond to the company's database marketing programs.
Additionally, this position assumes responsibility for reporting the
lessons learned from database marketing campaigns and the results of each
program.

The position will also involve interaction with GTE Laboratories
Knowledge Discovery group.


2. Principal Tasks:
* Build statistical Models which analyze the results of database
marketing programs
* Identify ways to improve targeting of database programs over time
* Report characteristics of responders and non-responders
* Define, implement, and maintain a log which states the optimum
approach to measuring and tracking the effectiveness of
each database marketing program

This position will involve analyzing and reporting the results of at
least one database marketing program per month, and identifying ways to
increase the effectiveness of each program.

3. Background/Experience:
M.S. in Statistics
Three or more years of successful database marketing or similar experience
Familiarity with statistical analysis and data mining packages
Strong written and verbal communication skills
Ability to work with others up and across a large organization
Experience in a telecommunications industry desirable

Please fax or send a cover letter and a resume to:
Rob Epstein
GTE Mobilnet Marketing (3MNT)
245 Perimeter Center Parkway
Atlanta, GA 30346
tel: 770-804-3572
fax: 770-395-8706

or email (preferably in ASCII) to dhaney@mobilnet.gte.com


>~~~Meetings:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: 'Staal Vinterbo' (pkdd97@idt.ntnu.no)
Date: Mon, 18 Nov 1996 16:57:39 +0100
Subject: PKDD'97: Call for papers

PKDD'97 -- 1st European Symposium on Principles of
Data Mining and Knowledge Discovery
Trondheim, Norway
June 25-27, 1997
Preliminary call for papers

Data Mining and Knowledge Discovery (KDD) have recently emerged from a
combination of many research areas: databases, statistics, machine
learning, automated scientific discovery, inductive programming,
artificial intelligence, visualization, decision science, and high
performance computing.

While each of these areas can contribute in specific ways, KDD focuses
on the value that is added by creative combination of the contributing
areas. The goal of PKDD'97 is to provide a European-based forum for
interaction among all theoreticians and practitioners interested in
data mining. Fostering an interdisciplinary collaboration is one
desired outcome, but the main long-term focus is on theoretical
principles for the emerging discipline of KDD, especially those new
principles that go beyond each of the contributing areas.

To promote these goals, PKDD'97 will be organized into tracks around
the key areas contributing to KDD. For each area an ideal paper
should focus on how its methods advance KDD's goals and principles.

Both theoretical and applied submissions are sought. Reviewers will
assess the contribution towards the main goals of PKDD'97, in addition
to the usual requirements of novelty, clarity and
significance. Applied papers should go beyond an individual
application, presenting an explicit method that promises a degree of
generality within some stage of the discovery process, such as
preprocessing, mining, visualization, use of prior knowledge,
knowledge refinement, and evaluation. Theoretical papers should
demonstrate how they advance the process of data mining and knowledge
discovery.

The following non-exclusive list exemplifies topics of interest:
* Data and knowledge representation for data mining
* Beyond relational databases: new forms of data organization
* Data reduction
* Prior domain knowledge and use of discovered knowledge
* Combining query systems with discovery capabilities
* Statistics and probability in data mining
* Discovery of probabilistic networks
* Modeling data and knowledge uncertainty
* Discovery of exceptions and deviations
* Statistical significance in large-scale search
* The problems of over-fit
* Logic-based perspective on data mining
* Inferring knowledge from data
* Exploring different subspaces of first order logic
* Rough sets in data mining
* Fuzzy sets in data mining
* Boolean approaches to data mining
* Inductive Logic Programming for mining real databases
* Pattern-recognition for data mining
* Clustering analysis
* Tolerance (similarity) relations
* KDD-motivated discretization of data
* Man-Machine interaction in data mining
* Visualization of data
* Visualization of results
* Interface design
* Interactive data mining: human and computer contributions
* Artificial Intelligence contributions to KDD
* Representing knowledge and hypotheses spaces
* Search for knowledge and its complexities
* Combining many methods in one system
* High performance computing for data mining
* Hardware dedicated to discovery applications
* Parallel discovery algorithms and complexity
* Distributed data mining
* Scalability in high dimensional datasets
* From machine learning to KDD
* From concept learning to concept discovery
* Expanding the autonomy of machine learners
* Embedding learning methods in KDD systems
* Conceptual clustering in knowledge discovery
* From automated scientific discovery to KDD
* Applications of scientific discovery systems to databases
* Experience with hypothesis evaluation that transfers to KDD
* Hypothesis spaces of scientific discovery applied in KDD
* Differences between the data handled in both fields
* Scientific discovery techniques relevant in KDD
* Quality assessment of data mining results
* Multi-criteria knowledge evaluation
* Benchmarks and metrics for system evaluation
* Statistical tests in KDD applications
* Usefulness and risk assessment in decision-making
* Applications of data mining and knowledge discovery
* Medicine: diagnosis and prognosis
* Control theory: predictive and adaptive control, model identification
* Engineering: diagnosis of mechanisms and processes
* Public administration
* Marketing and finance
* Data mining on the web in text and heterogeneous data
* Natural and social science

Submissions are by email (preferred) to pkdd97@idt.ntnu.no or by
airmail to Jan Komorowski (see address below). Papers should be in
English and not exceed ten single-spaced pages of 12pt font. The
first page should begin with title, authors, affiliations, surface and
e-mail addresses, and an abstract of about 200 words.

Important dates -
Submission deadline: February 5th, 1997
Notice of acceptance: March 3rd
Camera ready papers: March 23rd

PANEL DISCUSSIONS: proposals are sought for panels that stimulate
interaction between the communities contributing to KDD. Include the
title, prospective participants, and a summary of the topics to be
discussed. Submit to zytkow@cs.twsu.edu by March 14th. Notice of
acceptance by March 21st.

POSTER SESSION: informative descriptions of successful applications of
data mining and knowledge discovery techniques in processing new data
sets may be submitted for presentation at the poster session. Send an
extended abstract, not exceeding two pages of 12pt, single spaced text
to pkdd97@idt.ntnu.no by March 14th. Notice of acceptance by March
21st.

TUTORIALS: proposals are solicited for tutorials that: (1) transfer
know-how and provide hands-on experience, (2) combine two or more
areas (e.g. rough sets and statistics, high-performance computing and
databases, etc), or (3) cover application domains such as finance,
medicine, or automatic control.
Submission to pkdd97@idt.ntnu.no by February 19th.
Notice of acceptance by March 10th.

DEMONSTRATIONS OF SOFTWARE for data mining and knowledge discovery are
invited. This includes professional and experimental systems. Send
descriptions to pkdd97@idt.ntnu.no by June 2nd.

Program co-chairs:

Jan Komorowski, Trondheim, Norway Jan Zytkow, Wichita, USA
Jan.Komorowski@idt.ntnu.no zytkow@cs.twsu.edu

Department of Computer Systems
Norwegian University of Science and Technology
7034 Trondheim, Norway

Details regarding the conference will be forthcoming. Watch the
PKDD'97 WWW page for details http://www.idt.ntnu.no/pkdd97.


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: Marney Smyth (marney@ai.mit.edu)
Subject: Modern Regression and Classification
Date: Sun, 10 Nov 1996 17:12:19 -0500 (EST)

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++                                                        +++
+++          Modern Regression and Classification          +++
+++       Widely Applicable Statistical Methods for        +++
+++               Modeling and Prediction                  +++
+++                                                        +++
+++         Cambridge, MA, December 9 - 10, 1996           +++
+++                                                        +++
+++         Trevor Hastie, Stanford University             +++
+++         Rob Tibshirani, University of Toronto          +++
+++                                                        +++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


This two-day course will give a detailed overview of statistical
models for regression and classification. Known as machine learning in
computer science and artificial intelligence, and as pattern recognition
in engineering, this is a hot field with powerful applications in
science, industry and finance.

Additional information is available at the Website:

http://playfair.stanford.edu/~trevor/mrc.html

(see also previous posting in KDD Nuggets 96:31, item 7
http://info.gte.com/gtel/sponsored/kdd/nuggets/96/n31.html#item7 )

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: Marney Smyth (marney@ai.mit.edu)
Subject: Learning Methods for Prediction, Classification
Date: Sun, 10 Nov 1996 17:13:10 -0500 (EST)

**************************************************************
***                                                        ***
***    Learning Methods for Prediction, Classification,    ***
***       Novelty Detection and Time Series Analysis       ***
***                                                        ***
***         Los Angeles, CA, December 14-15, 1996          ***
***                                                        ***
***         Geoffrey Hinton, University of Toronto         ***
***      Michael Jordan, Massachusetts Inst. of Tech.      ***
***                                                        ***
**************************************************************


A two-day intensive Tutorial on Advanced Learning Methods will be held
on December 14 and 15, 1996, at Loews Hotel, Santa Monica, CA. Space
is available for up to 50 participants for the course.

The course will provide an in-depth discussion of the large collection
of new tools that have become available in recent years for developing
autonomous learning systems and for aiding in the analysis of complex
multivariate data. These tools include neural networks, hidden Markov
models, belief networks, decision trees, memory-based methods, as well
as increasingly sophisticated combinations of these architectures.
Applications include prediction, classification, fault detection,
time series analysis, diagnosis, optimization, system identification
and control, exploratory data analysis and many other problems in
statistics, machine learning and data mining.

The course will be devoted equally to the conceptual foundations of
recent developments in machine learning and to the deployment of these
tools in applied settings. Case studies will be described to show how
learning systems can be developed in real-world settings. Architectures
and algorithms will be presented in some detail, but with a minimum of
mathematical formalism and with a focus on intuitive understanding.
Emphasis will be placed on using machine learning methods as tools that
can be combined to solve the problem at hand.

WHO SHOULD ATTEND THIS COURSE?

The course is intended for engineers, data analysts, scientists,
managers and others who would like to understand the basic principles
underlying learning systems. The focus will be on neural network models
and related graphical models such as mixture models, hidden Markov
models, Kalman filters and belief networks. No previous exposure to
machine learning algorithms is necessary although a degree in engineering
or science (or equivalent experience) is desirable. Those attending
can expect to gain an understanding of the current state-of-the-art
in machine learning and be in a position to make informed decisions
about whether this technology is relevant to specific problems in
their area of interest.

COURSE OUTLINE

Overview of learning systems; LMS, perceptrons and support vectors;
generalized linear models; multilayer networks; recurrent networks;
weight decay, regularization and committees; optimization methods;
active learning; applications to prediction, classification and control

Graphical models: Markov random fields and Bayesian belief networks;
junction trees and probabilistic message passing; calculating most
probable configurations; Boltzmann machines; influence diagrams;
structure learning algorithms; applications to diagnosis, density
estimation, novelty detection and sensitivity analysis

Clustering; mixture models; mixtures of experts models; the EM
algorithm; decision trees; hidden Markov models; variations on
hidden Markov models; applications to prediction, classification
and time series modeling

Subspace methods; mixtures of principal component modules; factor
analysis and its relation to PCA; Kalman filtering; switching
mixtures of Kalman filters; tree-structured Kalman filters;
applications to novelty detection and system identification

Approximate methods: sampling methods, variational methods;
graphical models with sigmoid units and noisy-OR units; factorial
HMMs; the Helmholtz machine; computationally efficient upper
and lower bounds for graphical models

REGISTRATION

Standard Registration: $700

Student Registration: $400

Cancellation Policy: Cancellation before Friday December 6th, 1996,
incurs a penalty of $150.00. Cancellation after Friday December 6th,
1996, incurs a penalty of one-half of Registration Fee.

Registration Fee includes Course Materials, breakfast, coffee breaks,
and lunch on Saturday December 14th.

On-site Registration is possible. Payment of on-site registration must
be in US Dollar amounts, by Money Order or Check (preferably drawn on
a US Bank account).



Those interested in participating should return the completed
Registration Form and Fee as soon as possible, as the total number of
places is limited by the size of the venue.




Please print this form, and fill in the hard copy to return by mail

REGISTRATION FORM

Learning Methods for Prediction, Classification,
Novelty Detection and Time Series Analysis

Saturday, December 14 - Sunday, December 15, 1996
Santa Monica, CA, USA.
--------------------------------------

Please complete this form (type or print)

Name ___________________________________________________
Last First Middle

Firm or Institution ______________________________________



Standard Registration ____ Student Registration ____



Mailing Address (for receipt) _________________________

__________________________________________________________

__________________________________________________________

__________________________________________________________
Country Phone FAX

__________________________________________________________
email address

(Lunch Menu, Saturday December 14th - tick as appropriate):


___ Vegetarian ___ Non-Vegetarian


Fee payment must be made by MONEY ORDER or PERSONAL CHECK. All amounts
are given in US dollar figures. Make fee payable to Prof. Michael
Jordan. Mail it, together with this completed Registration Form to:

Professor Michael Jordan
Dept. of Brain and Cognitive Sciences
M.I.T.
E10-034D
77 Massachusetts Avenue
Cambridge, MA 02139
USA




HOTEL ACCOMMODATION

Hotel accommodations are the personal responsibility of each participant.

The Tutorial will be held in

Loews Santa Monica Beach Hotel,
1700 Ocean Avenue
Santa Monica CA 90401
(310) 458-6700 FAX (310) 458-0020

on December 14 and 15, 1996.

The hotel has reserved a block of rooms for participants of the course. The
special room rates for participants are:

U.S. $170.00 (city view) per night + tax

U.S. $250.00 (full ocean view) per night + tax

Please be aware that these prices do not include State or City taxes.
Participants may wish to avail themselves of the discounted overnight
parking rates of $13.30 (self) and $15.50 (valet).



ADDITIONAL INFORMATION

A registration form is available from the course's WWW page at

http://www.ai.mit.edu/projects/cbcl/web-pis/jordan/course/index.html

Marney Smyth
Phone: 617 258-8928
Fax: 617 258-6779
E-mail: marney@ai.mit.edu


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 5 Nov 1996 18:35:35 +0500 (GMT)
From: KBCS Word Processing (kbcs@konark.ncst.ernet.in)
Subject: KBCS-96 Call for Participation

Call for Participation

INTERNATIONAL CONFERENCE ON
KNOWLEDGE BASED COMPUTER SYSTEMS
St. Andrew's Auditorium, Bandra
Bombay, India
December 16-18, 1996
URL : http://konark.ncst.ernet.in/~kbcs/kbcs96.html
____________________________________________________________________________

The International Conference on Knowledge Based Computer Systems will be
held in Mumbai, India during December 16-18, 1996. The conference will act
as a forum for promoting interaction among researchers in the field of
Artificial Intelligence in India and abroad. There will be a two day
conference during December 16-17, 1996 followed by one day of
post-conference tutorials on December 18, 1996.

Papers have been submitted to the conference on the following topics. About
40 papers will be presented during the conference.

AI Applications AI Architectures
Automatic Programming Cognitive Modeling
Expert Systems Foundations of AI
Genetic Algorithms Fuzzy Systems
Information Retrieval Intelligent Tutoring Systems
Knowledge Acquisition Knowledge Representation
Machine Learning Machine Translation
Natural Language Processing Neural Networks
Planning and Scheduling Reasoning
Robotics Search Techniques
Speech Processing Theorem Proving
Uncertainty Handling User Interfaces
User Modeling Vision

Post-Conference Tutorials:

All tutorials will be conducted on December 18, 1996 at NCST, Juhu, Mumbai.

* T1: Hybrid Intelligent Systems - Full Day
Larry R Medsker, The American University, USA
* T2: Case-based Reasoning - Full Day
John A Campbell, University College London, UK
* T3: Data Mining - Half Day
R Uthurusamy, General Motors Research Labs, Detroit, USA

Registration Fees:

Conference
o Students in India : Rs 850
o Delegates from not-for-profit R&D and educational
organisations in India : Rs 1200
o Other Indian Delegates : Rs 1800
o Delegates from Abroad : US $100
Tutorials
o Half Day : Rs 750
o Full Day : Rs 1000

Programme Committee:

S. Arunkumar, IIT, Bombay Amitava Bagchi, IIM, Calcutta
Pushpak Bhattacharya, IIT, Bombay Margaret A. Boden, U of Sussex, UK
Nick Cercone, U of Regina, Canada B. B. Chaudhuri, ISI, Calcutta
R. Chandrasekar, NCST, Bombay S. K. Goyal, GTE Labs , USA
S. Sen Gupta, TUL, Bombay J. R. Isaac, NIIT, New Delhi
Aravind K. Joshi, U of Pennsylvania, USA R. A. Kowalski, Imperial College, UK
H. N. Mahabala, INFOSYS, Bangalore M. Narasimha Murty, IISc, Bangalore
R. Narasimhan, CMC, Bangalore S. Ramani, NCST, Bombay (Chair)
P. V. S. Rao, TIFR, Bombay Patrick Saint-Dizier, U of Paul Sabatier, France
R. Sangal, IIT, Kanpur R. Uthurusamy, GMR, USA
M. Vidyasagar, CAIR, Bangalore


Organizing Committee:

George Arakal, NCST (Chair) K.S.R. Anjaneyulu, NCST
P. Ravi Prakash, NCST Durgesh D. Rao, NCST
M. Sasikumar, NCST T. Suresh, NCST



For further information and the registration details please refer
to the KBCS-96 home page or write to the KBCS-96 Secretariat.

___________________________________________________________________________
Address
KBCS-96 Secretariat Phone : +91 (22) 620 1606
National Centre for Software Technology Fax : +91 (22) 621 0139
Gulmohar Cross Rd No. 9 E-mail : kbcs@konark.ncst.ernet.in
Juhu, Bombay 400 049, India
URL : http://konark.ncst.ernet.in/~kbcs/kbcs96.html

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: gordon@AIC.NRL.Navy.Mil
Date: Tue, 19 Nov 96 10:55:31 EST
Subject: workshop proposals

REMINDER:
The WORKSHOP PROPOSALS for ICML-97 are due by December 4, 1996,
preferably by email to gordon@aic.nrl.navy.mil.
For details about the conference and the Call for Workshop
Proposals, see the ICML-97 Web site:

http://cswww.vuse.vanderbilt.edu/~mlccolt/icml97/index.html

Diana Gordon


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~