# Data-Intensive Distributed Computing, Part 6: Data Mining (1/4)



This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

CS 451/651 431/631 (Winter 2018)

Jimmy Lin David R. Cheriton School of Computer Science

University of Waterloo

February 27, 2018

These slides are available at http://lintool.github.io/bigdata-2018w/

Structure of the Course

“Core” framework features and algorithm design

[Course structure diagram: the course modules are Analyzing Text, Analyzing Graphs, Analyzing Relational Data, and Data Mining.]

Descriptive vs. Predictive Analytics

[Architecture diagram: frontends and backends serving users, plus external APIs, each writing to an OLTP database. ETL (Extract, Transform, and Load) moves data from the OLTP databases into a Data Warehouse ("Data Lake"), which data scientists query through "traditional" BI tools, SQL on Hadoop, and other tools.]

Supervised Machine Learning

The generic problem of function induction given sample instances of input and output

Classification: the output is drawn from a finite set of discrete labels

Regression: the output is a continuous value

This is not meant to be an exhaustive treatment of machine learning!

Source: Wikipedia (Sorting)

Classification

Applications

Spam detection

Sentiment analysis

Content (e.g., topic) classification

Link prediction

Document ranking

Object recognition

Fraud detection

And much much more!

[Diagram: at training time, labeled examples are fed to a Machine Learning Algorithm, which produces a Model; at testing/deployment time, the Model assigns labels to unseen inputs.]

Supervised Machine Learning

Feature Representations

Objects are represented in terms of features:

“Dense” features: sender IP, timestamp, # of recipients, length of message, etc.

“Sparse” features: contains the term “viagra” in message, contains “URGENT” in subject, etc.
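The dense/sparse distinction can be sketched in a few lines of Python. The feature names and the encoding below are hypothetical illustrations, not something the slides prescribe: dense features fill a value for every slot, while sparse features store only the ones that are present.

```python
def dense_features(msg):
    """Dense features: almost every object has a value for each slot."""
    return [
        len(msg["body"]),            # length of message
        len(msg["recipients"]),      # number of recipients
        msg["timestamp"] % 86400,    # time of day, in seconds
    ]

def sparse_features(msg):
    """Sparse features: most are absent; store only the ones present."""
    feats = {}
    for term in msg["body"].lower().split():
        feats["body:" + term] = 1    # e.g., contains the term "viagra"
    if "URGENT" in msg["subject"]:
        feats["subject:URGENT"] = 1  # contains "URGENT" in subject
    return feats

msg = {"body": "cheap viagra now", "recipients": ["a@x.com"],
       "subject": "URGENT offer", "timestamp": 1519772400}
print(dense_features(msg))           # three dense values, always present
print(sparse_features(msg))          # dict keyed by feature name
```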


Components of a ML Solution

Data, Features, Model, Optimization

No data like more data! (Banko and Brill, ACL 2001; Brants et al., EMNLP 2007)

Limits of Supervised Classification?

Why is this a big data problem? Isn’t gathering labels a serious bottleneck?

Solutions: crowdsourcing; bootstrapping and other semi-supervised techniques; exploiting user behavior logs.

The virtuous cycle of data-driven products

[Diagram: the virtuous product cycle. A useful service → analyze user behavior to extract insights (data science) → transform insights into action (data products) → $ (hopefully), feeding back into the service. Examples: Google, Facebook, Twitter, Amazon, Uber.]

What’s the deal with neural networks?

Data, Features, Model, Optimization

Supervised Binary Classification

Restrict the output label to be binary: Yes/No, i.e., 1/0.

Binary classifiers form primitive building blocks for multi-class problems…

Binary Classifiers as Building Blocks

Example: four-way classification

One vs. rest classifiers: train four binary classifiers (“A or not?”, “B or not?”, “C or not?”, “D or not?”) and take the most confident prediction.

Classifier cascades: apply the binary classifiers in sequence (“A or not?”; if not, “B or not?”; if not, “C or not?”; if not, “D or not?”).
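Both compositions can be sketched in Python. The scorers below are toy stand-ins for trained binary classifiers (a real system would learn them from labeled data); only the control flow around them is the point.

```python
def make_scorer(target):
    # Toy binary scorer: confidence that x belongs to the target class.
    return lambda x: 1.0 if x == target else 0.0

classifiers = {label: make_scorer(label) for label in "ABCD"}

def one_vs_rest(x):
    # Run every "label or not?" classifier, pick the most confident.
    return max(classifiers, key=lambda label: classifiers[label](x))

def cascade(x, order="ABCD"):
    # Classifier cascade: ask "A or not?", then "B or not?", and so on.
    for label in order:
        if classifiers[label](x) >= 0.5:
            return label
    return order[-1]  # fall through to the last class

print(one_vs_rest("C"))  # C
print(cascade("C"))      # C
```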

The Task

Given: $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{x}_i = [x_1, x_2, x_3, \ldots, x_d]$ is a (sparse) feature vector and $y \in \{0, 1\}$ is its label.

Induce a function $f : X \to Y$ such that the loss is minimized:

$$\frac{1}{n} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i), y_i)$$

where $\ell$ is the loss function.

Typically, we consider functions of a parametric form:

$$\operatorname*{argmin}_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \theta), y_i)$$

where $\theta$ are the model parameters.

Key insight: machine learning as an optimization problem! (Closed-form solutions are generally not possible.*)

*With some caveats.
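To make the objective concrete, here is a toy computation of the empirical loss $\frac{1}{n}\sum_i \ell(f(\mathbf{x}_i), y_i)$. The fixed classifier and the squared loss are illustrative assumptions, not choices made in the slides.

```python
def f(x):
    # A fixed toy classifier: predict 1 if the features sum to positive.
    return 1 if sum(x) > 0 else 0

def sq_loss(p, y):
    # Squared loss: 0 when correct, 1 when wrong (for 0/1 outputs).
    return (p - y) ** 2

D = [([1.0, 2.0], 1), ([-1.0, 0.5], 0), ([2.0, -1.0], 0)]

# Average the per-example loss over the dataset.
empirical = sum(sq_loss(f(x), y) for x, y in D) / len(D)
print(empirical)  # one of three examples is misclassified, so 1/3
```

Learning then means searching the parameter space for the $\theta$ that drives this average down, which is exactly the optimization view above.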

Gradient Descent: Preliminaries

Rewrite:

$$\operatorname*{argmin}_{\theta} L(\theta) = \operatorname*{argmin}_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \theta), y_i)$$

Compute the gradient, which “points” in the fastest-increasing “direction”:

$$\nabla L(\theta) = \left[ \frac{\partial L(\theta)}{\partial w_0}, \frac{\partial L(\theta)}{\partial w_1}, \ldots, \frac{\partial L(\theta)}{\partial w_d} \right]$$

So, at any point $a$, stepping against the gradient does not increase the loss: if $b = a - \gamma \nabla L(a)$, then $L(a) \geq L(b)$.*

*For a sufficiently small step size $\gamma$.

Gradient Descent: Iterative Update

Start at an arbitrary point, iteratively update:

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \nabla L(\theta^{(t)})$$

We have: $L(\theta^{(0)}) \geq L(\theta^{(1)}) \geq L(\theta^{(2)}) \geq \ldots$

(New weights = old weights, updated based on the gradient.)

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(\mathbf{x}_i; \theta^{(t)}), y_i)$$

In one dimension, $\nabla \ell$ is simply $\frac{d}{dx}\ell$.

Intuition behind the math…

Gradient Descent: Iterative Update

Start at an arbitrary point, iteratively update:

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \nabla L(\theta^{(t)})$$

We have: $L(\theta^{(0)}) \geq L(\theta^{(1)}) \geq L(\theta^{(2)}) \geq \ldots$

Lots of details: figuring out the step size; getting stuck in local minima; convergence rate; …

Repeat until convergence:

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(\mathbf{x}_i; \theta^{(t)}), y_i)$$

Gradient Descent

Note: sometimes formulated as gradient ascent, but the two are entirely equivalent.

Gradient Descent

Source: Wikipedia (Hills)

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(\mathbf{x}_i; \theta^{(t)}), y_i)$$
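The iterative update can be sketched in a few lines of Python. The one-dimensional loss $L(\theta) = (\theta - 3)^2$ and the fixed step size are illustrative choices to keep the loop visible, not values from the slides.

```python
def grad(theta):
    # dL/dθ for the illustrative loss L(θ) = (θ - 3)², minimized at θ = 3.
    return 2.0 * (theta - 3.0)

theta, gamma = 0.0, 0.1          # arbitrary starting point, fixed step size
for t in range(100):             # "repeat until convergence"
    theta = theta - gamma * grad(theta)

print(round(theta, 4))           # converges to (very nearly) 3.0
```

Each pass moves θ a fraction of the way toward the minimum; with this step size, the distance to the optimum shrinks by a factor of 0.8 per iteration.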

Even More Details…

Gradient descent is a “first order” optimization technique; often, slow convergence.

Newton and quasi-Newton methods. Intuition: the second-order Taylor expansion

$$f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x + \frac{1}{2} f''(x)\,\Delta x^2$$

Requires the Hessian (the square matrix of second-order partial derivatives): impractical to fully compute.
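In one dimension the "Hessian" is just the second derivative, and the Newton step is easy to see. On the same illustrative quadratic loss as before (an assumption, not from the slides), Newton's method lands on the minimum in a single step where gradient descent takes many:

```python
def grad(theta):
    return 2.0 * (theta - 3.0)   # L'(θ) for L(θ) = (θ - 3)²

def hess(theta):
    return 2.0                   # L''(θ): the 1-D "Hessian"

theta = 0.0
theta = theta - grad(theta) / hess(theta)  # one Newton step
print(theta)  # 3.0
```

For a quadratic, the second-order Taylor expansion is exact, which is why one step suffices; in higher dimensions the step requires solving a system with the full Hessian, which is what makes it impractical at scale.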

Source: Wikipedia (Hammer)

Logistic Regression: Preliminaries

Given: $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, with $\mathbf{x}_i = [x_1, x_2, x_3, \ldots, x_d]$ and $y \in \{0, 1\}$.

Define the log odds as a linear function of the features:

$$\ln \frac{\Pr(y = 1 \mid \mathbf{x})}{\Pr(y = 0 \mid \mathbf{x})} = \mathbf{w} \cdot \mathbf{x}$$

or, equivalently:

$$\ln \frac{\Pr(y = 1 \mid \mathbf{x})}{1 - \Pr(y = 1 \mid \mathbf{x})} = \mathbf{w} \cdot \mathbf{x}$$

Interpretation: a thresholded classifier $f(\mathbf{x}; \mathbf{w}) : \mathbb{R}^d \to \{0, 1\}$:

$$f(\mathbf{x}; \mathbf{w}) = \begin{cases} 1 & \text{if } \mathbf{w} \cdot \mathbf{x} \geq t \\ 0 & \text{if } \mathbf{w} \cdot \mathbf{x} < t \end{cases}$$

Relation to the Logistic Function

After some algebra:

$$\Pr(y = 1 \mid \mathbf{x}) = \frac{e^{\mathbf{w} \cdot \mathbf{x}}}{1 + e^{\mathbf{w} \cdot \mathbf{x}}} \qquad \Pr(y = 0 \mid \mathbf{x}) = \frac{1}{1 + e^{\mathbf{w} \cdot \mathbf{x}}}$$

The logistic function:

$$f(z) = \frac{e^z}{e^z + 1}$$

[Plot of logistic(z) for z from −8 to 8: an S-shaped curve rising from 0 to 1, crossing 0.5 at z = 0.]
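A direct Python transcription of the logistic function, evaluated over the same range as the plot, shows the S-shape numerically: 0.5 at z = 0, saturating toward 0 and 1 at the extremes.

```python
import math

def logistic(z):
    # f(z) = e^z / (e^z + 1), equivalently 1 / (1 + e^{-z}).
    # Branch on the sign of z so the exponent is never large and positive,
    # which guards against floating-point overflow.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (ez + 1.0)

for z in (-8, -2, 0, 2, 8):
    print(z, round(logistic(z), 4))
# logistic(0) is exactly 0.5; values approach 0 and 1 at the extremes
```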

Training an LR Classifier

Maximize the conditional likelihood:

$$\operatorname*{argmax}_{\mathbf{w}} \prod_{i=1}^{n} \Pr(y_i \mid \mathbf{x}_i, \mathbf{w})$$

Define the objective in terms of the conditional log likelihood:

$$L(\mathbf{w}) = \sum_{i=1}^{n} \ln \Pr(y_i \mid \mathbf{x}_i, \mathbf{w})$$

We know $y \in \{0, 1\}$, so:

$$\Pr(y \mid \mathbf{x}, \mathbf{w}) = \Pr(y = 1 \mid \mathbf{x}, \mathbf{w})^{y} \, \Pr(y = 0 \mid \mathbf{x}, \mathbf{w})^{(1 - y)}$$

Substituting:

$$L(\mathbf{w}) = \sum_{i=1}^{n} \Big( y_i \ln \Pr(y_i = 1 \mid \mathbf{x}_i, \mathbf{w}) + (1 - y_i) \ln \Pr(y_i = 0 \mid \mathbf{x}_i, \mathbf{w}) \Big)$$

LR Classifier Update Rule

Take the derivative of the conditional log likelihood:

$$\frac{\partial}{\partial \mathbf{w}} L(\mathbf{w}) = \sum_{i=1}^{n} \mathbf{x}_i \Big( y_i - \Pr(y_i = 1 \mid \mathbf{x}_i, \mathbf{w}) \Big)$$

General form of the update rule (gradient ascent on the log likelihood):

$$\mathbf{w}^{(t+1)} \leftarrow \mathbf{w}^{(t)} + \gamma^{(t)} \nabla_{\mathbf{w}} L(\mathbf{w}^{(t)}) \qquad \nabla L(\mathbf{w}) = \left[ \frac{\partial L(\mathbf{w})}{\partial w_0}, \frac{\partial L(\mathbf{w})}{\partial w_1}, \ldots, \frac{\partial L(\mathbf{w})}{\partial w_d} \right]$$

Final update rule, per component:

$$w_i^{(t+1)} \leftarrow w_i^{(t)} + \gamma^{(t)} \sum_{j=1}^{n} x_{j,i} \Big( y_j - \Pr(y_j = 1 \mid \mathbf{x}_j, \mathbf{w}^{(t)}) \Big)$$
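The update rule can be sketched as batch gradient ascent in plain Python. The tiny linearly separable dataset, the step size, and the iteration count are all illustrative assumptions:

```python
import math

def prob_y1(x, w):
    # Pr(y = 1 | x, w) via the logistic function applied to w · x.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, 0.5]]
y = [1, 1, 0, 0]

w = [0.0, 0.0]     # start at an arbitrary point
gamma = 0.1        # step size
for t in range(200):
    # Gradient component i: sum_j x_{j,i} * (y_j - Pr(y_j = 1 | x_j, w))
    grad = [sum(X[j][i] * (y[j] - prob_y1(X[j], w)) for j in range(len(X)))
            for i in range(len(w))]
    w = [w[i] + gamma * grad[i] for i in range(len(w))]

print([round(prob_y1(x, w)) for x in X])  # predictions match the labels
```

Note the plus sign in the update: we ascend the log likelihood, which is equivalent to descending the negative log likelihood.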

Want more details? Take a real machine-learning course!

Lots more details…

Regularization; different loss functions; …

…

MapReduce Implementation

$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \gamma^{(t)} \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(\mathbf{x}_i; \theta^{(t)}), y_i)$$

[Diagram: each mapper computes a partial gradient over its split of the data; a single reducer sums the partial gradients and updates the model; iterate until convergence.]

Shortcomings

Hadoop is bad at iterative algorithms: high job startup costs; awkward to retain state across iterations.

High sensitivity to skew: iteration speed is bounded by the slowest task.

Potentially poor cluster utilization: must shuffle all data to a single reducer.

Some possible tradeoffs: number of iterations vs. complexity of computation per iteration. E.g., L-BFGS: faster convergence, but more to compute per iteration.
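The mapper/reducer decomposition can be simulated in plain Python (no actual Hadoop here): each "mapper" computes a partial gradient over its split, a single "reducer" sums the partials and updates the model, and the driver iterates. Data, step size, and iteration count are illustrative.

```python
import math

def prob_y1(x, w):
    # Pr(y = 1 | x, w) via the logistic function applied to w · x.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def mapper(split, w):
    # Partial gradient of the log likelihood over one data split.
    d = len(w)
    partial = [0.0] * d
    for x, y in split:
        err = y - prob_y1(x, w)
        for i in range(d):
            partial[i] += x[i] * err
    return partial

def reducer(partials, w, gamma):
    # Sum the partial gradients from all mappers, then update the model.
    total = [sum(p[i] for p in partials) for i in range(len(w))]
    return [w[i] + gamma * total[i] for i in range(len(w))]

splits = [
    [([1.0, 2.0], 1), ([2.0, 1.0], 1)],      # split held by mapper 1
    [([-1.0, -1.5], 0), ([-2.0, 0.5], 0)],   # split held by mapper 2
]

w = [0.0, 0.0]
for t in range(200):                          # one MapReduce job per iteration
    partials = [mapper(s, w) for s in splits]
    w = reducer(partials, w, gamma=0.1)

print([round(prob_y1(x, w)) for s in splits for x, _ in s])
```

The structure makes the shortcomings above concrete: every iteration is a full job launch, and every partial gradient funnels through the one reducer.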
