Lecture notes on Advanced Optimization - Lecture 5: Gradient Descent - Hoàng Nam Dũng

Outline

- How to choose step sizes
- Convergence analysis
- Nonconvex functions
- Gradient boosting

Gradient Descent
Hoàng Nam Dũng
Faculty of Mathematics, Mechanics and Informatics, VNU University of Science, Vietnam National University, Hanoi
Gradient descent

Consider unconstrained, smooth convex optimization

min_x f(x)

with convex and differentiable function f : R^n → R. Denote the optimal value by f* = min_x f(x) and a solution by x*.

Gradient descent: choose initial point x^(0) ∈ R^n, repeat:

x^(k) = x^(k−1) − t_k · ∇f(x^(k−1)), k = 1, 2, 3, . . .

Stop at some point.
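As a quick illustration (not from the slides), the update rule is a few lines of NumPy; the step size t, the iteration count, and the quadratic test function below are illustrative assumptions.

```python
import numpy as np

def gradient_descent(grad, x0, t=0.1, num_iters=100):
    """Gradient descent with fixed step size: x^(k) = x^(k-1) - t * grad(x^(k-1))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - t * grad(x)
    return x

# Example: f(x) = (10*x1^2 + x2^2)/2 has gradient (10*x1, x2)
x_hat = gradient_descent(lambda x: np.array([10 * x[0], x[1]]), x0=[10.0, 10.0])
print(x_hat)  # approaches the minimizer (0, 0)
```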
Gradient descent interpretation

At each iteration, consider the expansion

f(y) ≈ f(x) + ∇f(x)^T (y − x) + (1/(2t)) ‖y − x‖₂².

Quadratic approximation, replacing the usual Hessian ∇²f(x) by (1/t)I:

- f(x) + ∇f(x)^T (y − x) is the linear approximation to f
- (1/(2t)) ‖y − x‖₂² is a proximity term to x, with weight 1/(2t)

Choose the next point y = x⁺ to minimize the quadratic approximation:

x⁺ = x − t∇f(x).
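To see where the update comes from (a one-step derivation filling in the algebra): setting the gradient of the quadratic approximation with respect to y to zero gives

∇f(x) + (1/t)(y − x) = 0 ⟺ y = x − t∇f(x),

so the minimizer of the surrogate is exactly x⁺ = x − t∇f(x).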
Gradient descent interpretation

[Figure: blue point is x, red point is x⁺ = argmin_y f(x) + ∇f(x)^T (y − x) + (1/(2t)) ‖y − x‖₂²]
Outline

- How to choose step sizes
- Convergence analysis
- Nonconvex functions
- Gradient boosting
Fixed step size

Simply take t_k = t for all k = 1, 2, 3, . . .; this can diverge if t is too big. Consider f(x) = (10x₁² + x₂²)/2, gradient descent after 8 steps:

[Figure: gradient descent iterates after 8 steps on the contours of f]
Fixed step size

Can be slow if t is too small. Same example, gradient descent after 100 steps:

[Figure: gradient descent iterates after 100 steps]
Fixed step size

Converges nicely when t is "just right". Same example, 40 steps:

[Figure: gradient descent iterates after 40 steps]

Convergence analysis later will give us a precise idea of "just right".
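A small numerical sketch of the three regimes above, assuming illustrative step sizes (the slides do not state the exact values behind the figures). For this f the Hessian is diag(10, 1), so L = 10 and a fixed step converges for this quadratic exactly when 0 < t < 2/L = 0.2.

```python
import numpy as np

grad = lambda x: np.array([10 * x[0], x[1]])  # gradient of f(x) = (10*x1^2 + x2^2)/2

def run(t, steps, x0=(-18.0, 18.0)):
    x = np.array(x0)
    for _ in range(steps):
        x = x - t * grad(x)
    return x

print(run(0.30, 8))    # too big (t > 0.2): the x1 coordinate diverges
print(run(0.01, 100))  # too small: x2 is still far from the optimum (0, 0)
print(run(0.10, 40))   # "just right" (t = 1/L): essentially at the optimum
```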
Backtracking line search

One way to adaptively choose the step size is to use backtracking line search:

- First fix parameters 0 < β < 1 and 0 < α ≤ 1/2.
- At each iteration, start with t = t_init, and while

  f(x − t∇f(x)) > f(x) − αt ‖∇f(x)‖₂²

  shrink t = βt. Else perform the gradient descent update

  x⁺ = x − t∇f(x).

Simple and tends to work well in practice (further simplification: just take α = 1/2). A minimal code sketch follows.
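The sketch below implements the loop just described, assuming NumPy and illustrative defaults (t_init = 1, β = 0.8); nothing here beyond the backtracking condition itself is prescribed by the slides.

```python
import numpy as np

def gd_backtracking(f, grad, x0, t_init=1.0, alpha=0.5, beta=0.8,
                    tol=1e-8, max_iters=1000):
    """Gradient descent where each step size is chosen by backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # stop when the gradient is small
            return x
        t = t_init
        # shrink t until f(x - t*g) <= f(x) - alpha * t * ||g||^2
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

f = lambda x: (10 * x[0]**2 + x[1]**2) / 2
grad = lambda x: np.array([10 * x[0], x[1]])
print(gd_backtracking(f, grad, [-18.0, 18.0]))  # converges to (0, 0)
```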
Backtracking interpretation (from Boyd & Vandenberghe, Section 9.2)

[Figure 9.1: Backtracking line search. The curve shows f, restricted to the line over which we search. The lower dashed line shows the linear extrapolation of f, and the upper dashed line has a slope a factor of α smaller. The backtracking condition is that f lies below the upper dashed line, i.e., 0 ≤ t ≤ t₀.]
The line search is called backtracking because it starts with unit step size and then reduces it by the factor β until the stopping condition f(x + t∆x) ≤ f(x) + αt∇f(x)^T∆x holds. Since ∆x is a descent direction, we have ∇f(x)^T∆x < 0, so for small enough t we have

f(x + t∆x) ≈ f(x) + t∇f(x)^T∆x < f(x) + αt∇f(x)^T∆x,

which shows that the backtracking line search eventually terminates. The constant α can be interpreted as the fraction of the decrease in f predicted by linear extrapolation that we will accept. (The reason for requiring α to be smaller than 0.5 will become clear later.)

The backtracking condition is illustrated in figure 9.1. This figure suggests, and it can be shown, that the backtracking exit inequality f(x + t∆x) ≤ f(x) + αt∇f(x)^T∆x holds for t ≥ 0 in an interval (0, t₀]. It follows that the backtracking line search stops with a step length t that satisfies

t = 1, or t ∈ (βt₀, t₀].

The first case occurs when the step length t = 1 satisfies the backtracking condition, i.e., 1 ≤ t₀. In particular, we can say that the step length obtained by backtracking line search satisfies

t ≥ min{1, βt₀}.

When dom f is not all of R^n, the condition f(x + t∆x) ≤ f(x) + αt∇f(x)^T∆x in the backtracking line search must be interpreted carefully. By our convention that f is infinite outside its domain, the inequality implies that x + t∆x ∈ dom f. In a practical implementation, we first multiply t by β until x + t∆x ∈ dom f.
For us, ∆x = −∇f(x).
Backtracking line search

Setting α = β = 0.5, backtracking picks up roughly the right step size (12 outer steps, 40 steps total):

[Figure: backtracking gradient descent iterates on the same example]
Exact line search

We could also choose the step to do the best we can along the direction of the negative gradient, called exact line search:

t = argmin_{s≥0} f(x − s∇f(x)).

It is usually not possible to do this minimization exactly. Approximations to exact line search are typically not as efficient as backtracking, so they are usually not worth the effort.
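One setting where the minimization is tractable (an illustrative special case, not from the slides): for a quadratic f(x) = ½ x^T Q x − b^T x with Q ≻ 0, write g = ∇f(x) = Qx − b; then

d/ds f(x − sg) = −g^T (Q(x − sg) − b) = −g^T g + s · g^T Q g,

and setting this to zero gives the exact step s = (g^T g)/(g^T Q g).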
Convergence analysis

Assume that f : R^n → R is convex and differentiable, and additionally

‖∇f(x) − ∇f(y)‖₂ ≤ L ‖x − y‖₂ for any x, y,

i.e., ∇f is Lipschitz continuous with constant L > 0.

Theorem
Gradient descent with fixed step size t ≤ 1/L satisfies

f(x^(k)) − f* ≤ ‖x^(0) − x*‖₂² / (2tk)

and the same result holds for backtracking with t replaced by β/L.

We say gradient descent has convergence rate O(1/k), i.e., it finds an ε-suboptimal point in O(1/ε) iterations.

Proof. See slides 20-25 in lectures/gradient.pdf.
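A numerical sanity check of the theorem, assuming the earlier quadratic example, where L = 10, x* = (0, 0) and f* = 0:

```python
import numpy as np

f = lambda x: (10 * x[0]**2 + x[1]**2) / 2
grad = lambda x: np.array([10 * x[0], x[1]])

t = 1 / 10.0                         # t = 1/L, as the theorem requires
x0 = np.array([-18.0, 18.0])
x = x0.copy()
for k in range(1, 101):
    x = x - t * grad(x)
    bound = x0 @ x0 / (2 * t * k)    # ||x^(0) - x*||_2^2 / (2 t k)
    assert f(x) <= bound             # the O(1/k) guarantee holds at every k
```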
Convergence under strong convexity

Reminder: strong convexity of f means f(x) − (m/2)‖x‖₂² is convex for some m > 0.

Assuming Lipschitz gradient as before and also strong convexity:

Theorem
Gradient descent with fixed step size t ≤ 2/(m + L) or with backtracking line search satisfies

f(x^(k)) − f* ≤ c^k (L/2) ‖x^(0) − x*‖₂²

where 0 < c < 1.

The rate under strong convexity is O(c^k), exponentially fast, i.e., we find an ε-suboptimal point in O(log(1/ε)) iterations.

Proof. See slides 26-27 in lectures/gradient.pdf.
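For orientation (a standard refinement found, e.g., in Nesterov's book, not stated on these slides): with t = 2/(m + L) one can show

‖x^(k) − x*‖₂ ≤ ((κ − 1)/(κ + 1))^k ‖x^(0) − x*‖₂, κ = L/m,

and combining this with the smoothness bound f(x^(k)) − f* ≤ (L/2)‖x^(k) − x*‖₂² recovers the theorem with c = ((κ − 1)/(κ + 1))². Note that c → 1 as κ grows, which anticipates the condition-number discussion below.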
Convergence rate

Called linear convergence, because it looks linear on a semi-log plot.

[Figure 9.6 from Boyd & Vandenberghe: error f(x^(k)) − p⋆ versus iteration k for the gradient method with backtracking and exact line search, for a problem in R¹⁰⁰.]
These experiments suggest that the effect of the backtracking parameters on the convergence is not large, no more than a factor of two or so.

Gradient method and condition number

Our last experiment will illustrate the importance of the condition number of ∇²f(x) (or the sublevel sets) on the rate of convergence of the gradient method. We start with the function given by (9.21), but replace the variable x by x = T x̄, where

T = diag(1, γ^{1/n}, γ^{2/n}, . . . , γ^{(n−1)/n}),

i.e., we minimize

f̄(x̄) = c^T T x̄ − Σ_{i=1}^m log(b_i − a_i^T T x̄).   (9.22)

This gives us a family of optimization problems, indexed by γ, which affects the problem condition number.

Figure 9.7 shows the number of iterations required to achieve f̄(x̄^(k)) − p̄⋆ < 10⁻⁵ as a function of γ, using a backtracking line search with α = 0.3 and β = 0.7. This plot shows that for diagonal scaling as small as 10:1 (i.e., γ = 10), the number of iterations grows to more than a thousand; for a diagonal scaling of 20 or more, the gradient method becomes essentially useless.

The condition number of the Hessian ∇²f̄(x̄⋆) at the optimum is shown in figure 9.8. For large and small γ, the condition number increases roughly as max{γ², 1/γ²}, in a very similar way as the number of iterations depends on γ. This shows again that the relation between conditioning and convergence speed is a real phenomenon, and not just an artifact of our analysis.
(From Boyd & Vandenberghe, page 487.)

Important note: the contraction factor c in the rate depends adversely on the condition number L/m: a higher condition number means a slower rate. This affects not only our upper bound; it is very apparent in practice too.
A look at the conditions

A look at the conditions for a simple problem, f(β) = (1/2) ‖y − Xβ‖₂².

Lipschitz continuity of ∇f:

- This means ∇²f(x) ⪯ LI.
- As ∇²f(β) = X^T X, we have L = σ_max(X^T X).

Strong convexity of f:

- This means ∇²f(x) ⪰ mI.
- As ∇²f(β) = X^T X, we have m = σ_min(X^T X).
- If X is wide (i.e., X is n × p with p > n), then σ_min(X^T X) = 0, and f can't be strongly convex.
- Even if σ_min(X^T X) > 0, we can have a very large condition number L/m = σ_max(X^T X)/σ_min(X^T X), as the numerical check below illustrates.
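A quick numerical check of these constants, assuming a random Gaussian X (an illustrative choice):

```python
import numpy as np

def conditioning(X):
    """Extreme eigenvalues of X^T X, i.e., L and m for f(beta) = ||y - X beta||^2 / 2."""
    eigs = np.linalg.eigvalsh(X.T @ X)  # eigenvalues of X^T X, in ascending order
    return eigs[-1], eigs[0]            # (L, m)

rng = np.random.default_rng(0)
L, m = conditioning(rng.standard_normal((100, 20)))  # tall X: m > 0
print(L / m)                                          # finite condition number
L, m = conditioning(rng.standard_normal((20, 100)))  # wide X (p > n)
print(m)                                              # ~0: f is not strongly convex
```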
Practicalities

Stopping rule: stop when ‖∇f(x)‖₂ is small.

- Recall ∇f(x*) = 0 at solution x*.
- If f is strongly convex with parameter m, then

  ‖∇f(x)‖₂ ≤ √(2mε) =⇒ f(x) − f* ≤ ε

  (see the short derivation below).

Pros and cons of gradient descent:

- Pro: simple idea, and each iteration is cheap (usually).
- Pro: fast for well-conditioned, strongly convex problems.
- Con: can often be slow, because many interesting problems aren't strongly convex or well-conditioned.
- Con: can't handle nondifferentiable functions.
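Why the stopping rule works (a one-line argument filling in the step): strong convexity means f(y) ≥ f(x) + ∇f(x)^T (y − x) + (m/2)‖y − x‖₂² for all y; minimizing both sides over y gives

f* ≥ f(x) − (1/(2m)) ‖∇f(x)‖₂².

Hence ‖∇f(x)‖₂ ≤ √(2mε) implies f(x) − f* ≤ ε.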
Can we do better?

Gradient descent has O(1/ε) convergence rate over the problem class of convex, differentiable functions with Lipschitz gradients.

First-order method: iterative method, which updates x^(k) in

x^(0) + span{∇f(x^(0)), ∇f(x^(1)), . . . , ∇f(x^(k−1))}.

Theorem (Nesterov)
For any k ≤ (n − 1)/2 and any starting point x^(0), there is a function f in the problem class such that any first-order method satisfies

f(x^(k)) − f* ≥ 3L ‖x^(0) − x*‖₂² / (32(k + 1)²).

Can we attain the rate O(1/k²), or O(1/√ε)? Answer: yes (we'll see)!
What about nonconvex functions?

Assume f is differentiable with Lipschitz gradient as before, but now nonconvex. Asking for optimality is too much. So we'll settle for x such that ‖∇f(x)‖₂ ≤ ε, called ε-stationarity.

Theorem
Gradient descent with fixed step size t ≤ 1/L satisfies

min_{i=0,...,k} ‖∇f(x^(i))‖₂ ≤ √( 2(f(x^(0)) − f*) / (t(k + 1)) ).

Thus gradient descent has rate O(1/√k), or O(1/ε²), even in the nonconvex case for finding stationary points.

This rate cannot be improved (over the class of differentiable functions with Lipschitz gradients) by any deterministic algorithm¹.

¹ Carmon et al. (2017), "Lower bounds for finding stationary points I"
Proof

Key steps:

- ∇f Lipschitz with constant L means

  f(y) ≤ f(x) + ∇f(x)^T (y − x) + (L/2) ‖y − x‖₂², ∀x, y.

- Plugging in y = x⁺ = x − t∇f(x),

  f(x⁺) ≤ f(x) − (1 − Lt/2) t ‖∇f(x)‖₂².

- Taking 0 < t ≤ 1/L, and rearranging,

  ‖∇f(x)‖₂² ≤ (2/t)(f(x) − f(x⁺)).

- Summing over iterations,

  Σ_{i=0}^k ‖∇f(x^(i))‖₂² ≤ (2/t)(f(x^(0)) − f(x^(k+1))) ≤ (2/t)(f(x^(0)) − f*).

- Lower-bound the sum by (k + 1) min_{i=0,...,k} ‖∇f(x^(i))‖₂², and conclude.
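A numerical sanity check of the nonconvex rate, on an assumed 1-D test function (not from the slides): f(x) = cos(x) + x²/4 is nonconvex, with f''(x) = −cos(x) + 1/2 ∈ [−1/2, 3/2], so its gradient is Lipschitz with L = 3/2.

```python
import numpy as np

f = lambda x: np.cos(x) + x**2 / 4
grad = lambda x: -np.sin(x) + x / 2

t = 1 / 1.5                                  # t = 1/L with L = 3/2
f_star = f(np.linspace(-4, 4, 10**6)).min()  # numerical stand-in for f*

x0 = 4.0
x, best = x0, abs(grad(x0))
for k in range(100):
    x = x - t * grad(x)
    best = min(best, abs(grad(x)))           # smallest gradient norm so far
    bound = np.sqrt(2 * (f(x0) - f_star) / (t * (k + 1)))
    assert best <= bound                     # the O(1/sqrt(k)) bound holds
```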
References and further reading

- S. Boyd and L. Vandenberghe (2004), Convex Optimization, Chapter 9
- T. Hastie, R. Tibshirani and J. Friedman (2009), The Elements of Statistical Learning, Chapters 10 and 16
- Y. Nesterov (1998), Introductory Lectures on Convex Optimization: A Basic Course, Chapter 1
- L. Vandenberghe, Lecture notes for EE 236C, UCLA, Spring 2011-2012
