Advanced Optimization Lecture Notes - Lecture 9: Stochastic Gradient Descent - Hoàng Nam Dũng

Stochastic Gradient Descent
Hoàng Nam Dũng
Khoa Toán - Cơ - Tin học, Đại học Khoa học Tự nhiên, Đại học Quốc gia Hà Nội
Last time: proximal gradient descent
Consider the problem
$$\min_x \; g(x) + h(x)$$
with $g, h$ convex, $g$ differentiable, and $h$ "simple" in so much as
$$\mathrm{prox}_t(x) = \operatorname*{argmin}_z \; \frac{1}{2t}\|x - z\|_2^2 + h(z)$$
is computable.

Proximal gradient descent: let $x^{(0)} \in \mathbb{R}^n$, repeat:
$$x^{(k)} = \mathrm{prox}_{t_k}\big(x^{(k-1)} - t_k \nabla g(x^{(k-1)})\big), \quad k = 1, 2, 3, \ldots$$
Step sizes $t_k$ chosen to be fixed and small, or via backtracking.

If $\nabla g$ is Lipschitz with constant $L$, then this has convergence rate $O(1/\epsilon)$. Lastly, we can accelerate this to the optimal rate $O(1/\sqrt{\epsilon})$.
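To connect this recap to code, here is a minimal sketch (not from the lecture) of the proximal gradient update, assuming the gradient of $g$ and the prox operator of $h$ are supplied as functions; the soft-thresholding example at the end is a hypothetical choice of $h$:

```python
import numpy as np

def proximal_gradient(grad_g, prox_h, x0, t, num_iters=100):
    """Minimal proximal gradient descent sketch with a fixed step size t.

    grad_g : function x -> gradient of the smooth part g at x
    prox_h : function (x, t) -> argmin_z (1/(2t)) * ||x - z||_2^2 + h(z)
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        x = prox_h(x - t * grad_g(x), t)   # prox step applied to a gradient step
    return x

# Hypothetical example: h(z) = lam * ||z||_1, whose prox is soft-thresholding
def soft_threshold(x, t, lam=0.1):
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)
```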
Outline
Today:
I Stochastic gradient descent
I Convergence rates
I Mini-batches
I Early stopping
Stochastic gradient descent
Consider minimizing an average of functions
$$\min_x \; \frac{1}{m} \sum_{i=1}^m f_i(x).$$
As $\nabla \sum_{i=1}^m f_i(x) = \sum_{i=1}^m \nabla f_i(x)$, gradient descent would repeat
$$x^{(k)} = x^{(k-1)} - t_k \cdot \frac{1}{m} \sum_{i=1}^m \nabla f_i(x^{(k-1)}), \quad k = 1, 2, 3, \ldots$$
In comparison, stochastic gradient descent or SGD (or incremental gradient descent) repeats
$$x^{(k)} = x^{(k-1)} - t_k \cdot \nabla f_{i_k}(x^{(k-1)}), \quad k = 1, 2, 3, \ldots$$
where $i_k \in \{1, \ldots, m\}$ is some chosen index at iteration $k$.
Stochastic gradient descent
Two rules for choosing index $i_k$ at iteration $k$:
I Randomized rule: choose $i_k \in \{1, \ldots, m\}$ uniformly at random.
I Cyclic rule: choose $i_k = 1, 2, \ldots, m, 1, 2, \ldots, m, \ldots$
The randomized rule is more common in practice. For the randomized rule, note that
$$\mathbb{E}[\nabla f_{i_k}(x)] = \nabla f(x),$$
so we can view SGD as using an unbiased estimate of the gradient at each step.
Main appeal of SGD:
I Iteration cost is independent of $m$ (the number of functions).
I Can also be a big savings in terms of memory usage.
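To make the update rule concrete, here is a minimal SGD sketch (not from the lecture) using the randomized rule; the function and argument names are illustrative assumptions:

```python
import numpy as np

def sgd(grad_fi, m, x0, step_size, num_iters=1000, seed=0):
    """Minimal SGD sketch with the randomized rule for choosing i_k.

    grad_fi   : function (i, x) -> gradient of the i-th component f_i at x
    m         : number of component functions
    step_size : function k -> t_k, e.g. lambda k: 1.0 / k for diminishing steps
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, num_iters + 1):
        i_k = rng.integers(m)                      # i_k chosen uniformly at random
        x = x - step_size(k) * grad_fi(i_k, x)     # stochastic gradient step
    return x
```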
Example: stochastic logistic regression
Given $(x_i, y_i) \in \mathbb{R}^p \times \{0, 1\}$, $i = 1, \ldots, n$, recall logistic regression
$$\min_\beta \; f(\beta) = \frac{1}{n} \sum_{i=1}^n \underbrace{\left( -y_i x_i^T \beta + \log(1 + \exp(x_i^T \beta)) \right)}_{f_i(\beta)}.$$
The gradient computation $\nabla f(\beta) = \frac{1}{n} \sum_{i=1}^n (p_i(\beta) - y_i) x_i$, with $p_i(\beta) = 1/(1 + \exp(-x_i^T \beta))$ the fitted probabilities, is doable when $n$ is moderate, but not when $n$ is huge.
Full gradient (also called batch) versus stochastic gradient:
I One batch update costs $O(np)$.
I One stochastic update costs $O(p)$.
Clearly, e.g., 10K stochastic steps are much more affordable.
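To illustrate the $O(p)$ versus $O(np)$ costs, here is a sketch with assumed variable names (not code from the lecture): one stochastic update touches a single row of the data, while the batch gradient touches all of it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_logistic_step(beta, X, y, i, t):
    """One stochastic logistic-regression update: uses only row i, so O(p) work."""
    x_i, y_i = X[i], y[i]
    p_i = sigmoid(x_i @ beta)               # fitted probability p_i(beta)
    return beta - t * (p_i - y_i) * x_i     # gradient of f_i(beta) is (p_i - y_i) x_i

def batch_gradient(beta, X, y):
    """Full (batch) gradient: uses all n rows, so O(np) work."""
    p = sigmoid(X @ beta)
    return X.T @ (p - y) / len(y)
```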
Batch vs. stochastic gradient descent
Small example with $n = 10$, $p = 2$ to show the "classic picture" for batch versus stochastic methods:

[Figure: iterate paths of batch (blue, "Batch") and stochastic (red, "Random") gradient descent on the small example.]

Blue: batch steps, $O(np)$
Red: stochastic steps, $O(p)$

Rule of thumb for stochastic methods:
I generally thrive far from optimum
I generally struggle close to optimum
Step sizes
Standard in SGD is to use diminishing step sizes, e.g., $t_k = 1/k$, for $k = 1, 2, 3, \ldots$

Why not fixed step sizes? Here's some intuition. Suppose we take the cyclic rule for simplicity. Setting $t_k = t$ for $m$ updates in a row, we get
$$x^{(k+m)} = x^{(k)} - t \sum_{i=1}^m \nabla f_i(x^{(k+i-1)}).$$
Meanwhile, full gradient descent with step size $t$ would give
$$x^{(k+1)} = x^{(k)} - t \sum_{i=1}^m \nabla f_i(x^{(k)}).$$
The difference here is $t \sum_{i=1}^m [\nabla f_i(x^{(k+i-1)}) - \nabla f_i(x^{(k)})]$, and if we hold $t$ constant, this difference will not generally go to zero.
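As a purely illustrative check of this intuition (not from the lecture), take quadratic components $f_i(x) = \frac{1}{2}(x - a_i)^2$ and compare $m$ cyclic updates at a fixed step size with one step using the summed full gradient; the gap settles at a nonzero value rather than vanishing.

```python
import numpy as np

# Illustrative components: f_i(x) = 0.5 * (x - a_i)^2, so grad f_i(x) = x - a_i
a = np.array([1.0, -2.0, 3.0, 0.5])
m, t = len(a), 0.1

def cyclic_pass(x, t):
    """m cyclic SGD updates in a row, all with fixed step size t."""
    for i in range(m):
        x = x - t * (x - a[i])
    return x

def full_step(x, t):
    """One step using the summed full gradient with the same step size t."""
    return x - t * np.sum(x - a)

x = 10.0
for k in range(10):
    x_next = cyclic_pass(x, t)
    diff = full_step(x, t) - x_next   # = t * sum_i [grad f_i(x^(k+i-1)) - grad f_i(x^(k))]
    print(f"pass {k}: full step minus cyclic pass = {diff:.4f}")
    x = x_next
```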
Convergence rates
Recall: for convex $f$, (sub)gradient descent with diminishing step sizes satisfies
$$f(x^{(k)}) - f^* = O(1/\sqrt{k}).$$
When $f$ is differentiable with a Lipschitz gradient, gradient descent with suitable fixed step sizes satisfies
$$f(x^{(k)}) - f^* = O(1/k).$$
What about SGD? For convex $f$, SGD with diminishing step sizes satisfies¹
$$\mathbb{E}[f(x^{(k)})] - f^* = O(1/\sqrt{k}).$$
Unfortunately this does not improve when we further assume $f$ has a Lipschitz gradient.
¹ E.g., Nemirovski et al. (2009), "Robust stochastic approximation approach to stochastic programming"
Convergence rates
Even worse is the following discrepancy!
When $f$ is strongly convex and has a Lipschitz gradient, gradient descent satisfies
$$f(x^{(k)}) - f^* = O(c^k)$$
where $c < 1$. But under the same conditions, SGD gives us²
$$\mathbb{E}[f(x^{(k)})] - f^* = O(1/k).$$
So stochastic methods do not enjoy the linear convergence rate of gradient descent under strong convexity.
What can we do to improve SGD?
² E.g., Nemirovski et al. (2009), "Robust stochastic approximation approach to stochastic programming"
Mini-batches
Also common is mini-batch stochastic gradient descent, where we choose a random subset $I_k \subseteq \{1, \ldots, m\}$ of size $|I_k| = b \ll m$ and repeat
$$x^{(k)} = x^{(k-1)} - t_k \cdot \frac{1}{b} \sum_{i \in I_k} \nabla f_i(x^{(k-1)}), \quad k = 1, 2, 3, \ldots$$
Again, we are approximating the full gradient by an unbiased estimate
$$\mathbb{E}\left[\frac{1}{b} \sum_{i \in I_k} \nabla f_i(x)\right] = \nabla f(x).$$
Using mini-batches reduces the variance of our gradient estimate by a factor of $1/b$, but is also $b$ times more expensive.
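A minimal mini-batch SGD sketch (illustrative only; the names are assumptions) that draws a random subset $I_k$ of size $b$ at each iteration and averages the corresponding gradients:

```python
import numpy as np

def minibatch_sgd(grad_fi, m, x0, b, step_size, num_iters=1000, seed=0):
    """Minimal mini-batch SGD sketch.

    grad_fi   : function (i, x) -> gradient of the i-th component f_i at x
    b         : mini-batch size, with b much smaller than m
    step_size : function k -> t_k
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, num_iters + 1):
        I_k = rng.choice(m, size=b, replace=False)       # random subset of size b
        g = sum(grad_fi(i, x) for i in I_k) / b          # averaged mini-batch gradient
        x = x - step_size(k) * g
    return x
```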
Batch vs. mini-batches vs. stochastic
Back to logistic regression, let's now consider a regularized version:
$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^n \left( -y_i x_i^T \beta + \log(1 + e^{x_i^T \beta}) \right) + \frac{\lambda}{2} \|\beta\|_2^2.$$
Write the criterion as
$$f(\beta) = \frac{1}{n} \sum_{i=1}^n f_i(\beta), \quad f_i(\beta) = -y_i x_i^T \beta + \log(1 + e^{x_i^T \beta}) + \frac{\lambda}{2} \|\beta\|_2^2.$$
The full gradient computation is $\nabla f(\beta) = \frac{1}{n} \sum_{i=1}^n (p_i(\beta) - y_i) x_i + \lambda \beta$.
Comparison between methods:
I One batch update costs $O(np)$.
I One mini-batch update costs $O(bp)$.
I One stochastic update costs $O(p)$.
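Putting the pieces together (still a sketch under the same assumed names as above), the per-component gradient of the regularized criterion can be supplied to the mini-batch routine:

```python
import numpy as np

def make_grad_fi(X, y, lam):
    """Per-component gradient of the regularized logistic criterion:
    grad f_i(beta) = (p_i(beta) - y_i) * x_i + lam * beta."""
    def grad_fi(i, beta):
        p_i = 1.0 / (1.0 + np.exp(-X[i] @ beta))
        return (p_i - y[i]) * X[i] + lam * beta
    return grad_fi

# Hypothetical usage with the minibatch_sgd sketch above:
# beta_hat = minibatch_sgd(make_grad_fi(X, y, lam=1e-3), m=len(y),
#                          x0=np.zeros(X.shape[1]), b=10,
#                          step_size=lambda k: 0.1)
```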
Batch vs. mini-batches vs. stochastic
Example with $n = 10{,}000$, $p = 20$; all methods use fixed step sizes:

[Figure: criterion value $f^{(k)}$ versus iteration number $k$ for full, stochastic, and mini-batch ($b = 10$ and $b = 100$) gradient descent.]
Batch vs. mini-batches vs. stochastic
What's happening? Now let's parametrize by flops³:

[Figure: criterion value $f^{(k)}$ versus flop count for full, stochastic, and mini-batch ($b = 10$ and $b = 100$) gradient descent.]

³ flops: floating point operations
Batch vs. mini-batches vs. stochastic
Finally, looking at the suboptimality gap (on a log scale):

[Figure: suboptimality gap $f^{(k)} - f^*$ (log scale) versus iteration number $k$ for full, stochastic, and mini-batch ($b = 10$ and $b = 100$) gradient descent.]
End of the story?
Short story:
I SGD can be super effective in terms of iteration cost, memory.
I But SGD is slow to converge, can’t adapt to strong convexity.
I And mini-batches seem to be a wash in terms of flops (though
they can still be useful in practice).
Is this the end of the story for SGD?
For a while, the answer was believed to be yes. Slow convergence for strongly convex functions was believed inevitable, as Nemirovski and others established matching lower bounds ... but this was for a more general stochastic problem, where $f(x) = \int F(x, \zeta) \, dP(\zeta)$.
New wave of "variance reduction" work shows we can modify SGD to converge much faster for finite sums (more later?).
SGD in large-scale ML
SGD has really taken off in large-scale machine learning
I In many ML problems we don't care about optimizing to high accuracy, since it doesn't pay off in terms of statistical performance.
I Thus (in contrast to what classic theory says) fixed step sizes
are commonly used in ML applications.
I One trick is to experiment with step sizes using a small fraction of the training data before running SGD on the full data set ... many other heuristics are common⁴ (see the sketch below).
I Many variants provide better practical stability, convergence:
momentum, acceleration, averaging, coordinate-adapted step
sizes, variance reduction ...
I See AdaGrad, Adam, AdaMax, SVRG, SAG, SAGA ... (more
later?).
⁴ E.g., Bottou (2012), "Stochastic gradient descent tricks"
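One way such a step-size experiment might be sketched (purely illustrative, not a heuristic prescribed in the lecture): try a few candidate step sizes on a small random fraction of the training data and keep the one with the lowest criterion value.

```python
import numpy as np

def pick_step_size(run_sgd, criterion, X, y,
                   candidates=(1.0, 0.1, 0.01, 0.001), frac=0.05, seed=0):
    """Illustrative step-size search on a small fraction of the training data.

    run_sgd   : function (X_sub, y_sub, t) -> fitted parameters after running SGD
    criterion : function (beta, X_sub, y_sub) -> objective value to compare
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=max(1, int(frac * len(y))), replace=False)
    X_sub, y_sub = X[idx], y[idx]
    scores = {t: criterion(run_sgd(X_sub, y_sub, t), X_sub, y_sub) for t in candidates}
    return min(scores, key=scores.get)   # step size with the lowest criterion value
```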
Early stopping
Suppose p is large and we wanted to fit (say) a logistic regression
model to data (xi , yi ) ∈ Rp × {0, 1}, i = 1, . . . , n.
We could solve (say) $\ell_2$ regularized logistic regression:
$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^n \left( -y_i x_i^T \beta + \log(1 + e^{x_i^T \beta}) \right) \quad \text{subject to } \|\beta\|_2 \le t.$$
We could also run gradient descent on the unregularized problem
$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^n \left( -y_i x_i^T \beta + \log(1 + e^{x_i^T \beta}) \right)$$
and stop early, i.e., terminate gradient descent well short of the global minimum.
Early stopping
Consider the following, for a very small constant step size $\epsilon$:
I Start at $\beta^{(0)} = 0$, the solution to the regularized problem at $t = 0$.
I Perform gradient descent on the unregularized criterion
$$\beta^{(k)} = \beta^{(k-1)} - \epsilon \cdot \frac{1}{n} \sum_{i=1}^n (p_i(\beta^{(k-1)}) - y_i) x_i, \quad k = 1, 2, 3, \ldots$$
(we could equally well consider SGD).
I Treat $\beta^{(k)}$ as an approximate solution to the regularized problem with $t = \|\beta^{(k)}\|_2$.
This is called early stopping for gradient descent. Why would we ever do this? It's both more convenient and potentially much more efficient than using explicit regularization.
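A minimal sketch of this procedure (illustrative names; the lecture does not specify a stopping rule, so the sketch simply records the whole path so that any iterate can be read off as the approximate solution at $t = \|\beta^{(k)}\|_2$):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def early_stopping_path(X, y, eps=1e-3, num_iters=5000):
    """Gradient descent on the unregularized logistic criterion, recording the path."""
    n, p = X.shape
    beta = np.zeros(p)                       # beta^(0) = 0, the solution at t = 0
    path = [(0.0, beta.copy())]
    for _ in range(num_iters):
        prob = sigmoid(X @ beta)             # p_i(beta) for all i
        grad = X.T @ (prob - y) / n          # gradient of the unregularized criterion
        beta = beta - eps * grad             # small constant step size eps
        path.append((np.linalg.norm(beta), beta.copy()))
    return path                              # pairs (t = ||beta^(k)||_2, beta^(k))
```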
An intriguing connection
When we solve the $\ell_2$ regularized logistic problem for varying $t$, the solution path looks quite similar to the gradient descent path!

Example with $p = 8$, solution and gradient descent paths side by side:

[Figure: coordinates of $\hat{\beta}(t)$ along the ridge logistic path (left) and of $\beta^{(k)}$ along the stagewise (gradient descent) path (right), plotted against $g(\hat{\beta}(t))$ and $g(\beta^{(k)})$ respectively.]
Lots left to explore
I Connection holds beyond logistic regression, for arbitrary loss.
I In general, the gradient descent path will not coincide with the $\ell_2$ regularized path (as $\epsilon \to 0$). Though in practice, it seems to give competitive statistical performance.
I Can extend the early stopping idea to mimic a generic regularizer (beyond $\ell_2$)⁵.
I There is a lot of literature on early stopping, but it's still not as well understood as it should be.
I Early stopping is just one instance of implicit or algorithmic regularization; many others are effective in large-scale ML, and they all should be better understood.

⁵ Tibshirani (2015), "A general framework for fast stagewise algorithms"
References and further reading
D. Bertsekas (2010), "Incremental gradient, subgradient, and proximal methods for convex optimization: a survey"
A. Nemirovski, A. Juditsky, G. Lan and A. Shapiro (2009), "Robust stochastic approximation approach to stochastic programming"
R. Tibshirani (2015), "A general framework for fast stagewise algorithms"
