Journal of Zhengzhou University (Natural Science Edition)  2025, Vol. 57, Issue (4): 55-62  DOI: 10.13705/j.issn.1671-6841.2023199

Cite this article

LANG Xuancong, WANG Mei, LIU Yong, et al. Regret Bounds for Online Pairwise Learning with Predictable Non-convex Loss Functions[J]. Journal of Zhengzhou University (Natural Science Edition), 2025, 57(4): 55-62.

Funding

National Natural Science Foundation of China (51774090, 62076234); Heilongjiang Provincial Postdoctoral Research Start-up Fund (LBH-Q20080); Heilongjiang Provincial Graduate Quality Course Construction Project (15141220103)

Corresponding author

LIU Yong (1986—), male, professor; research interests: machine learning, large-scale machine learning, and statistical learning theory; E-mail: liuyonggsai@ruc.edu.cn

About the author

LANG Xuancong (1998—), female, Ph.D. candidate; research interest: large-scale machine learning; E-mail: xuancongl@163.com

Article history

Received: 2023-08-30
Regret Bounds for Online Pairwise Learning with Predictable Non-convex Loss Functions
LANG Xuancong1, WANG Mei1,2, LIU Yong3,4, LI Chunsheng1,2    
1. College of Computer and Information Technology, Northeast Petroleum University, Daqing 163318, China;
2. Heilongjiang Key Laboratory of Petroleum Big Data and Intelligent Analysis, Northeast Petroleum University, Daqing 163318, China;
3. Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100049, China;
4. Beijing Key Laboratory of Big Data Management and Analysis Method at Renmin University of China, Beijing 100049, China
Abstract: Online pairwise learning is a machine learning model in which the loss functions depend on a pair of instances. Generalization is an important aspect of online pairwise learning theory research. Most of the existing works on online pairwise learning used adversarial loss functions and provided regret bounds only with convex loss functions. However, convexity was not typically applicable in practical scenarios. For non-convex online pairwise learning, the regret bound of online pairwise learning with a "predictable" loss function based on stability analysis was provided and the corresponding stability analysis was conducted. Through the relationship between stability and regret, a common way to measure the generalization ability of online pairwise learning, the regret bound was established with a "predictable" non-convex loss function. It was proved that when the learner had access to an offline oracle, "predictable" non-convex generalized online pairwise learning reached the regret bound O(T^{-3/2}). This study enriched the theoretical research on non-convex online pairwise learning, and the obtained bound was superior to the existing theoretical guarantees.
Key words: online pairwise learning; non-convex; stability; regret bounds; offline optimization oracle
0 Introduction

Online pairwise learning is mainly applied to ranking [1-3], metric learning [4], and related tasks. Compared with traditional batch learning, online pairwise learning is efficient, adaptive, and real-time, which makes it better suited to practical application scenarios and large-scale data; it has therefore received extensive attention in recent years.

The generalization ability of online pairwise learning is an important indicator of learning performance. Regarding the generalization of online pairwise learning, under the assumptions that the online environment provides a sequence of i.i.d. data and that the loss function is uniformly bounded, an excess generalization bound of O(T^{-1}) was first obtained via online-to-batch conversion [5-6]. Kar et al. [7] improved this generalization bound through symmetrization of expectations. Ying et al. [8-9] studied unregularized online pairwise learning algorithms in a reproducing kernel Hilbert space (RKHS), relaxed the strong convexity and boundedness assumptions on the loss function, and obtained a regret bound of O(T^{-1/3} log T). Chen et al. [10] improved the convergence rate of the last iterate through a tighter uniform bound on the iterate sequence, giving a regret bound of O(T^{-1/2}). Guo et al. [11], working in a regularized RKHS, obtained a convergence rate of O(T^{-1/4} log^{1/2} T) for a specific hinge loss. Wang et al. [12] gave a regret bound of O(T^{-2/3}) for online pairwise learning algorithms with polynomially decaying step sizes and multiple regularization parameters. Hu et al. [13] presented an error analysis of distributed multi-penalty regularized online pairwise learning based on a divide-and-conquer strategy.

As one of the important tools for analyzing generalization, stability has been widely used to study the generalization bounds of pointwise learning algorithms [14-15]. However, apart from the work of Shen et al. [16] and Lei et al. [17], there are few stability studies for pairwise learning. The former established stability results for stochastic gradient descent (SGD) with convex and strongly convex loss functions, derived generalization bounds from them, and analyzed the trade-off between the generalization error and the optimization error of SGD; the latter provided a more refined stability analysis and improved the generalization bound to O(γ log n).

In the theoretical study of online pairwise learning, the loss functions are usually assumed to be convex or strongly convex and adversarially chosen, and research on non-convex loss functions is relatively scarce. If the learner receives a known "predictable" sequence of loss functions, where "predictable" refers to an online learning setting with prior knowledge about the loss sequence, such as bounds on the variation [18] or the path length [19], then the regret bound can be better than under adversarial choices [20].

To address these issues, this paper presents regret bounds for "predictable" non-convex online pairwise learning based on stability analysis. The main contributions are: 1) the original online pairwise learning problem is rewritten as a generalized online pairwise learning framework, on top of which a "predictable" non-convex generalized online pairwise learning algorithm is proposed; 2) a stability analysis of generalized online pairwise learning with "predictable" non-convex loss functions is established; 3) based on the relation between stability and regret, it is proved that generalized online pairwise learning with "predictable" non-convex loss functions attains the optimal regret bound O(T^{-3/2}).

1 Online Pairwise Learning

Assume a sample space Z = X × Y, where $\boldsymbol{X} \subseteq \mathbf{R}^d$ is the input space, $\boldsymbol{Y} \subseteq \mathbf{R}$ is the output space, and x ∈ X and y ∈ Y are the input and output, respectively. Online pairwise learning learns a prediction function $p: \boldsymbol{X} \times \boldsymbol{X} \rightarrow \mathbf{R}$; the performance of p (with parameter w ∈ W) on a pair of instances (z, z′) is measured by a nonnegative loss function ℓ(w, z, z′), i.e.,

$ \boldsymbol{w}_{t} \in \operatorname{argmin}\left\{\sum\limits_{i=1}^{t-1} \ell_{i}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right), \boldsymbol{w} \in \boldsymbol{W}\right\} 。$

Ranking and metric learning are classical applications of pairwise learning. For ranking, the function p: X → Y orders the instances z = (x, y) and z′ = (x′, y′) consistently with their outputs, i.e., if y < y′, then p(x) < p(x′). Ranking as a pairwise learning problem is formalized as $\ell\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)=\psi\left(\operatorname{sgn}\left(y-y^{\prime}\right)\left(p(\boldsymbol{x})-p\left(\boldsymbol{x}^{\prime}\right)\right)\right)$, where sgn is the sign function and ψ can be the hinge loss $\psi(t)=\max \{1-t, 0\}$ or the logistic loss $\psi(t)=\log (1+\exp (-t))$. For supervised metric learning with Y = {-1, 1}, the distance function should make instances with the same label similar while separating instances with different labels; a commonly used distance function is $p\left(\boldsymbol{x}, \boldsymbol{x}^{\prime}\right)=\left\langle\boldsymbol{w},\left(\boldsymbol{x}-\boldsymbol{x}^{\prime}\right)\left(\boldsymbol{x}-\boldsymbol{x}^{\prime}\right)^{\mathrm{T}}\right\rangle$. Metric learning as a pairwise learning problem is formalized as $\ell\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)=\psi\left(\tau\left(y, y^{\prime}\right)\left(1-p\left(\boldsymbol{x}, \boldsymbol{x}^{\prime}\right)\right)\right)$, where τ(y, y′) = 1 if y = y′ and -1 otherwise.
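As a concrete illustration of the two loss formulations above (an illustrative sketch, not code from the paper), the following Python snippet evaluates the pairwise ranking loss and the pairwise metric-learning loss; the linear scorer p(x) = ⟨w, x⟩ and the toy data are assumptions made only for this example.

```python
import numpy as np

def hinge(t):
    """Hinge surrogate psi(t) = max{1 - t, 0}."""
    return float(np.maximum(1.0 - t, 0.0))

def ranking_loss(w, z, z_prime):
    """l(w, z, z') = psi(sgn(y - y')(p(x) - p(x'))), with p(x) = <w, x> assumed."""
    (x, y), (x2, y2) = z, z_prime
    return hinge(np.sign(y - y2) * (w @ x - w @ x2))

def metric_loss(w_mat, z, z_prime):
    """l(w, z, z') = psi(tau(y, y')(1 - p(x, x'))), with
    p(x, x') = <W, (x - x')(x - x')^T> and tau = 1 if y == y' else -1."""
    (x, y), (x2, y2) = z, z_prime
    diff = x - x2
    p = float(np.sum(w_mat * np.outer(diff, diff)))
    tau = 1.0 if y == y2 else -1.0
    return hinge(tau * (1.0 - p))

# one pair of instances z = (x, y), z' = (x', y')
z, z_prime = (np.array([1.0, 0.5]), 1), (np.array([0.2, -0.3]), -1)
print(ranking_loss(np.array([0.3, -0.1]), z, z_prime))
print(metric_loss(np.eye(2), z, z_prime))
```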

2 "Predictable" Non-convex Generalized Online Pairwise Learning
2.1 Generalized online pairwise learning

Generalized online pairwise learning is formulated as

$ \boldsymbol{w}_t \in \operatorname{argmin}\left\{\sum\limits_{i=1}^{t-1} \ell_i\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+\boldsymbol{\sigma}^{\mathrm{T}} \boldsymbol{w}: \boldsymbol{w} \in \boldsymbol{W}\right\} 。$

Generalized online pairwise learning contains the term σ^T w, where σ is a random noise vector sampled from exp(η), the exponential distribution with parameter η. The random noise term σ is introduced to avoid overfitting and to improve the generalization ability of online pairwise learning, so generalized online pairwise learning is a more general, robust online framework for pairwise learning. FTRL (follow the regularized leader) is an algorithm widely used under convexity assumptions [21]. When the random noise term in generalized online pairwise learning is replaced by the regularizer μ‖w‖², FTRL becomes a special case of generalized online pairwise learning,

$ \begin{aligned} & \boldsymbol{w}_t \in \operatorname{argmin}\left\{\sum\limits_{i=1}^{t-1} \ell_i\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+\mu\|\boldsymbol{w}\|^2\right\}, \\ & \mu \approx T^\rho, \rho \in(0, 1) 。\end{aligned} $
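For intuition only (a minimal sketch under assumptions, not the paper's implementation), the perturbed objective of generalized online pairwise learning and the FTRL objective obtained by replacing the noise term with μ‖w‖² can be compared on a toy one-dimensional decision set; the quadratic surrogate losses, the grid W, and all parameter values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 1.0  # parameter of the exponential noise distribution

def perturbed_objective(w, past_losses, sigma):
    """sum_{i<t} l_i(w) + sigma^T w, following the displayed objective above."""
    return sum(l(w) for l in past_losses) + float(sigma @ w)

def ftrl_objective(w, past_losses, mu):
    """sum_{i<t} l_i(w) + mu * ||w||^2, the FTRL special case."""
    return sum(l(w) for l in past_losses) + mu * float(w @ w)

# toy past pairwise losses (quadratic surrogates) and a 1-D decision set W
past_losses = [lambda w, c=c: float(np.sum((w - c) ** 2)) for c in (0.2, -0.4, 0.1)]
candidates = [np.array([v]) for v in np.linspace(-1.0, 1.0, 201)]

sigma = rng.exponential(scale=1.0 / eta, size=1)   # sigma ~ exp(eta), numpy scale = 1/eta
w_gen = min(candidates, key=lambda w: perturbed_objective(w, past_losses, sigma))
w_ftrl = min(candidates, key=lambda w: ftrl_objective(w, past_losses, mu=2.0))
print(float(w_gen[0]), float(w_ftrl[0]))
```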

Under the generalized online pairwise learning framework, the optimistic variant in which the loss sequence received by the learner is "predictable" admits better regret bounds. In some applications the loss functions may not be adversarial but "predictable". Reference [20] shows that better regret bounds can be obtained when the FTRL algorithm with a self-concordant regularizer and online mirror descent (OMD) type algorithms incorporate a sequence of predictable processes. Before presenting the generalized online pairwise learning algorithm, we briefly introduce the basic concepts involved.

2.2 Offline oracle

Prediction in online pairwise learning with expert advice is a repeated game between a learner and an adversary: the learner chooses from a finite set H of N experts, the adversary chooses from a decision set Y, and the loss function is ℓ_t ∈ L. Before the game begins, the adversary selects an arbitrary decision sequence y_1, y_2, …; in each round t = 1, 2, … the learner must choose (possibly at random) an expert h_t ∈ H, the adversary then reveals its decision y_t ∈ Y, and the learner suffers a loss. After receiving each pair of instances z, z′ ∈ Z, the learner's goal is to keep the cumulative difference between its losses and those of the best hypothesis w* ∈ W as small as possible. Regret is therefore a standard measure of the performance of online pairwise learning, defined as

$ \mathcal{R}_A(T)=\sum\limits_{t=1}^T\left[\ell_t\left(\boldsymbol{w}_t, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_t\left(\boldsymbol{w}^*, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] \text { 。} $
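For illustration (an assumption-laden sketch, not from the paper), the regret above can be computed directly from its definition once the per-round losses and the learner's predictions are known; the finite grid used to approximate the best fixed hypothesis w* is an assumption of the sketch.

```python
import numpy as np

def regret(losses, predictions, candidates):
    """R_A(T) = sum_t [ l_t(w_t) - l_t(w*) ], with w* approximated on a finite grid."""
    cum_alg = sum(l(w) for l, w in zip(losses, predictions))
    cum_best = min(sum(l(w) for l in losses) for w in candidates)
    return cum_alg - cum_best

# toy per-round losses l_t(w) = (w - c_t)^2 and a learner's iterates w_t
cs = [0.3, -0.2, 0.5, 0.1]
losses = [lambda w, c=c: (w - c) ** 2 for c in cs]
predictions = [0.0, 0.3, -0.2, 0.5]
candidates = np.linspace(-1.0, 1.0, 201)
print(regret(losses, predictions, candidates))
```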

The oracle-based learning model can be called the "optimizable experts" model, which is essentially the classical online learning model. In the "optimizable experts" model, the loss functions are assumed to be initially unknown to the learner, who is allowed to access them through an oracle. Given a sequence of loss functions submitted by the online learner, the offline oracle model returns the hypothesis minimizing the cumulative loss. The approximate version of the offline oracle model is defined as follows.

Definition 1 [22]  An offline oracle model takes as input a loss function ℓ: W → R and a d-dimensional vector σ, and outputs an approximate minimizer of $\boldsymbol{w} \mapsto \ell\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\langle\boldsymbol{\sigma}, \boldsymbol{w}\rangle$. If the returned w* ∈ W satisfies

$ \ell\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}^{*}\right\rangle \leqslant \\ \inf _{w \in W}\left[\ell\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\langle\boldsymbol{\sigma}, \boldsymbol{w}\rangle\right]+\left(\alpha+\beta\|\boldsymbol{\sigma}\|_{1}\right), $

then it is called an (α, β)-approximate oracle.

For convenience, an (α, β)-approximate oracle is denoted by ORAL_{α, β}(ℓ - ⟨σ, ·⟩). An approximate oracle is used because it is impossible to know whether a hypothesis is a global minimizer or merely a saddle point: ORAL may output a w without any guarantee of optimality, and in most cases w is only approximately optimal. The difference between the offline oracle model, which returns w* ∈ argmin ℓ(w), and the (α, β)-approximate oracle model is that the latter carries a perturbation term depending on α, β, and σ; better regret bounds can be obtained by specifying the magnitudes of α and β. Applications of approximate oracles in online pointwise learning include the following examples, after which a small sketch of the oracle interface is given.

1) The Query-GAN algorithm. With an approximate oracle and $\alpha=\frac{\hat{R}+R_{d}(T)}{T}$ (where $\hat{R}$ is the cumulative regret), it converges at rate T^{-1/2} for non-convex loss functions [23].

2) The FTRL and FTL (follow the leader) algorithms. With an approximate oracle and α = T^{-1/2}, a regret bound of T^{-1} can be achieved for semi-convex loss functions [24].

In examples 1) and 2), β = 0 and only α serves as the parameter of the approximate oracle.
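To make Definition 1 concrete, the following is a minimal sketch (not from the paper) of an (α, β)-approximate oracle realized by brute-force search over a finite candidate set; the candidate grid, the random choice among admissible answers, and the function names are assumptions of this write-up.

```python
import numpy as np

def approximate_oracle(loss, sigma, candidates, alpha, beta, rng=None):
    """Return some w in W whose perturbed objective loss(w) - <sigma, w> is within
    (alpha + beta * ||sigma||_1) of the best candidate, as in Definition 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    values = [loss(w) - float(sigma @ w) for w in candidates]
    tol = alpha + beta * float(np.abs(sigma).sum())
    best = min(values)
    # any candidate within the tolerance is an admissible oracle answer
    admissible = [w for w, v in zip(candidates, values) if v <= best + tol]
    return admissible[rng.integers(len(admissible))]
```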

2.3 "Predictable" non-convex generalized online pairwise learning algorithm

Assume that the loss functions are "predictable" and that the learner can access an (α, β)-approximate offline optimization oracle. Let g_t[ℓ_{1:t-1}] be the prediction of ℓ_t made in round t, with g_1 = 0. Given g_t, the learner accesses an (α, β)-approximate offline optimization oracle

$ \boldsymbol{w}_t=O R A L_{\alpha, \beta}\left(\sum\limits_{i=1}^{t-1} \ell_i+g_t-\left\langle\boldsymbol{\sigma}_t, \cdot\right\rangle\right), $

and the "predictable" non-convex generalized online pairwise learning algorithm is given as Algorithm 1.

Algorithm 1  "Predictable" non-convex generalized online pairwise learning algorithm

Input: η, ORAL_{α, β}, prediction sequence g_t[ℓ_{1:t-1}].

Output: w* ∈ W.

1) for t = 1, …, T do

2) $ \left\{\boldsymbol{\sigma}_{t, j}\right\}_{j=1}^d \stackrel{\text { i.i.d. }}{\sim} \exp (\eta) $ /* generate the random vector σ_t */

3) the learner predicts at time t

4) $ \quad \boldsymbol{w}_{t}=O R A L_{\alpha, \beta}\left(\sum\limits_{i=1}^{t-1} \ell_{i}+g_{t}-\left\langle\boldsymbol{\sigma}_{t}, \cdot\right\rangle\right) $

5) the learner suffers loss ℓ_t

6) end for

The sequence g_t plays the role of prior knowledge in online pairwise learning; clearly, when g_t approximates ℓ_t well, this non-convex, "predictable" optimistic setting incurs smaller regret. A sketch of the full loop appears below.
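The following sketch (an illustration under assumptions, not the authors' implementation) instantiates Algorithm 1; it reuses the brute-force approximate_oracle sketched in Section 2.2, and the toy quadratic losses, the 1-D decision grid, and the simple predictor g_{t+1} = ℓ_t are assumptions of this example.

```python
import numpy as np

def predictable_generalized_opl(losses, candidates, eta, alpha, beta, seed=0):
    """Algorithm 1: per round, draw sigma_t with i.i.d. exp(eta) coordinates and call the
    (alpha, beta)-approximate oracle on sum_{i<t} l_i + g_t - <sigma_t, .>."""
    rng = np.random.default_rng(seed)
    d = len(candidates[0])
    iterates, g = [], (lambda w: 0.0)                        # g_1 = 0
    for t, loss_t in enumerate(losses):
        sigma_t = rng.exponential(scale=1.0 / eta, size=d)   # numpy scale = 1/eta
        objective = lambda w: sum(l(w) for l in losses[:t]) + g(w)
        w_t = approximate_oracle(objective, sigma_t, candidates, alpha, beta, rng)
        iterates.append(w_t)
        g = loss_t                                           # predict the next loss by the last one
    return iterates

# toy run on quadratic surrogate losses over a 1-D decision set
losses = [lambda w, c=c: float(np.sum((w - c) ** 2)) for c in (0.4, 0.1, -0.3, 0.2)]
candidates = [np.array([v]) for v in np.linspace(-1.0, 1.0, 101)]
ws = predictable_generalized_opl(losses, candidates, eta=1.0, alpha=0.01, beta=0.0)
print([round(float(w[0]), 2) for w in ws])
```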

3 Stability Analysis

Algorithmic stability measures how sensitive the output of a learning algorithm is to small changes in the training data, and it is an important concept in the theoretical analysis of machine learning. For i.i.d. samples in the batch learning setting, stability is a key property for learnability, and it plays the same role for the learnability of online learning. Besides the commonly used uniform stability, there is the on-average stability given in Definition 2. In the following, a^T b or ⟨a, b⟩ denotes the Euclidean inner product of a and b, ‖·‖ denotes the 2-norm, and ‖·‖_p denotes a particular p-norm. A function f: C → R is said to be L-Lipschitz continuous with respect to the norm ‖·‖ if $|f(\boldsymbol{x})-f(\boldsymbol{y})| \leqslant L\|\boldsymbol{x}-\boldsymbol{y}\|$ for all $\boldsymbol{x}, \boldsymbol{y} \in C$.

Definition 2 [17, 25]  Assume the sequence of losses suffered by the learner is L-Lipschitz continuous. If there exists γ > 0 such that

$ \frac{1}{T} \sum\limits_{t=1}^T \mathbb{E}\left\|\ell_t\left(\boldsymbol{w}_t, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{t+1}\left(\boldsymbol{w}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right\| \leqslant L \gamma, $ (1)

then algorithm A is said to be γ-on-average stable.

Clearly, on-average stability is weaker than uniform stability (which requires $\left\|\ell_{t}\left(\boldsymbol{w}_{t}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{t+1}\left(\boldsymbol{w}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right\| \leqslant L \gamma$ in every round), since it only requires inequality (1) to hold for the iterates w_t in expectation and on average over rounds. This paper studies on-average stability, and all theorems are stated in terms of it, as a relaxation of the assumptions used in the theoretical analysis.
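As an illustration only (an assumption of this write-up, not a result of the paper), the on-average stability quantity in inequality (1) can be estimated empirically by averaging over several noisy runs; run_fn could, for instance, wrap the Algorithm 1 sketch of Section 2.3 with different seeds.

```python
import numpy as np

def average_stability(run_fn, losses, n_trials=20, seed=0):
    """Monte Carlo estimate of (1/T) * sum_t E| l_t(w_t) - l_{t+1}(w_{t+1}) |,
    where run_fn(seed) returns the iterates w_1, ..., w_T of one noisy run."""
    gaps = []
    for s in range(seed, seed + n_trials):
        ws = run_fn(s)
        gaps.append(np.mean([abs(losses[t](ws[t]) - losses[t + 1](ws[t + 1]))
                             for t in range(len(ws) - 1)]))
    return float(np.mean(gaps))
```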

The stability analysis first connects the expected regret of generalized online pairwise learning with its on-average stability. Theorem 1 establishes this connection for "predictable" non-convex generalized online pairwise learning.

Theorem 1  Let $\overline{\boldsymbol{w}}_t$ minimize $\sum\limits_{i=1}^{t-1} \ell_i(\boldsymbol{w})-\langle\boldsymbol{\sigma}, \boldsymbol{w}\rangle$, let D be a bound on the decision set $\boldsymbol{W} \subseteq \mathbf{R}^d$, and assume the losses suffered by the learner are L_t-Lipschitz continuous with respect to the 1-norm. Then "predictable" non-convex generalized online pairwise learning admits the upper bound

$ \begin{aligned} & \frac{1}{T} \mathbb{E}\left[\sum\limits_{t=1}^T \ell_t\left(\boldsymbol{w}_t, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\inf _{\boldsymbol{w} \in \boldsymbol{W}} \sum\limits_{t=1}^T \ell_t\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] \leqslant \\ & \sum\limits_{t=1}^T \frac{L_t}{T} \underbrace{\mathbb{E}\left[\left\|\boldsymbol{w}_t-\overline{\boldsymbol{w}}_{t+1}\right\|_1\right]}_{\text {Stability }}+\frac{d(\beta T+D)}{\eta T}+\alpha 。\end{aligned} $ (2)

Proof  Let $\Delta_{t}(\boldsymbol{w})=\ell_{t}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-g_{t}(\boldsymbol{w})$ and $\gamma(\boldsymbol{\sigma})=\alpha+\beta\|\boldsymbol{\sigma}\|_{1}$. For any w* ∈ W,

$ \begin{aligned} & \sum\limits_{t=1}^{T}\left[\ell_{t}\left(\boldsymbol{w}_{t}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{t}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right]= \\ & \sum\limits_{t=1}^{T}\left[\Delta_{t}\left(\boldsymbol{w}_{t}\right)-\Delta_{t}\left(\overline{\boldsymbol{w}}_{t+1}\right)\right]+\sum\limits_{t=1}^{T}\left[g_{t}\left(\boldsymbol{w}_{t}\right)-g_{t}\left(\overline{\boldsymbol{w}}_{t+1}\right)\right]+ \\ & \sum\limits_{t=1}^{T}\left[\ell_{t}\left(\overline{\boldsymbol{w}}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{t}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] \leqslant \\ & \sum\limits_{t=1}^{T} L_{t}\left\|\boldsymbol{w}_{t}-\overline{\boldsymbol{w}}_{t+1}\right\|_{1}+\sum\limits_{t=1}^{T}\left[g_{t}\left(\boldsymbol{w}_{t}\right)-g_{t}\left(\overline{\boldsymbol{w}}_{t+1}\right)\right]+ \\ & \sum\limits_{t=1}^{T}\left[\ell_{t}\left(\overline{\boldsymbol{w}}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{t}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] 。\end{aligned} $

We prove by induction that, for any T and any w* ∈ W, the inequality

$ \begin{aligned} & \sum\limits_{t=1}^{T}\left[g_{t}\left(\boldsymbol{w}_{t}\right)-g_{t}\left(\overline{\boldsymbol{w}}_{t+1}\right)\right]+\sum\limits_{t=1}^{T}\left[\ell_{t}\left(\overline{\boldsymbol{w}}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\right. \\ & \left.\ell_{t}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] \leqslant\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_{2}-\boldsymbol{w}^{*}\right\rangle+\gamma(\boldsymbol{\sigma})(T-1) \end{aligned} $

holds.

Base case T = 1: g_1 = 0, and since $\overline{\boldsymbol{w}}_2$ minimizes $\ell_{1}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\langle\boldsymbol{\sigma}, \boldsymbol{w}\rangle$, we have

$ \begin{aligned} & \ell_{1}\left(\overline{\boldsymbol{w}}_{2}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_{2}\right\rangle \leqslant \ell_{1}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}^{*}\right\rangle, \\ & \forall \boldsymbol{w}^{*} \in \boldsymbol{W}_{。} \end{aligned} $

Hence $\ell_{1}\left(\overline{\boldsymbol{w}}_{2}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\ell_{1}\left(\boldsymbol{w}^{*}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right) \leqslant\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_{2}-\boldsymbol{w}^{*}\right\rangle$.

Induction step: assume the claim holds for all T ≤ T_0 - 1; then it also holds for T_0:

$ \begin{aligned} & \sum\limits_{t=1}^{T_0}\left[g_t\left(\boldsymbol{w}_t\right)-g_t\left(\overline{\boldsymbol{w}}_{t+1}\right)\right]+\sum\limits_{t=1}^{T_0} \ell_t\left(\overline{\boldsymbol{w}}_{t+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right) \leqslant \\ & {\left[\sum\limits_{t=1}^{T_0-1} \ell_t\left(\boldsymbol{w}_{T_0}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_2-\boldsymbol{w}_{T_0}\right\rangle+\gamma(\boldsymbol{\sigma})\left(T_0-2\right)\right]+} \\ & {\left[g_{T_0}\left(\boldsymbol{w}_{T_0}\right)-g_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}\right)+\ell_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right]=} \\ & {\left[\sum\limits_{t=1}^{T_0-1} \ell_t\left(\boldsymbol{w}_{T_0}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_{T_0}\left(\boldsymbol{w}_{T_0}\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}_{T_0}\right\rangle\right]+} \\ & {\left[\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_2\right\rangle-g_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}\right)+\ell_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right]+} \\ & \gamma(\boldsymbol{\sigma})\left(T_0-2\right) \leqslant \\ & {\left[\sum\limits_{t=1}^{T_0-1} \ell_t\left(\overline{\boldsymbol{w}}_{T_0+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+\right.} \\ & \left.g_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}\right)-\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_{T_0+1}\right\rangle\right]+\left[\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_2\right\rangle-g_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}\right)+\right. \\ & \left.\ell_{T_0}\left(\overline{\boldsymbol{w}}_{T_0+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right]+\gamma(\boldsymbol{\sigma})\left(T_0-1\right)= \\ & {\left[\sum\limits_{t=1}^{T_0} \ell_t\left(\overline{\boldsymbol{w}}_{T_0+1}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_{T_0+1}\right\rangle\right]+} \\ & \left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_2\right\rangle+\gamma(\boldsymbol{\sigma})\left(T_0-1\right) \leqslant \\ & \sum\limits_{t=1}^{T_0} \ell_t\left(\boldsymbol{w}^*, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+ \\ & \left\langle\boldsymbol{\sigma}, \overline{\boldsymbol{w}}_2-\boldsymbol{w}^*\right\rangle+\gamma(\boldsymbol{\sigma})\left(T_0-1\right) 。\end{aligned} $

Here the first inequality follows from the induction hypothesis, which holds for every T ≤ T_0 - 1, the second from the approximate optimality of $\boldsymbol{w}_{T_0}$, and the last from the optimality of $\overline{\boldsymbol{w}}_{T_0+1}$.

Combining the above, "predictable" non-convex generalized online pairwise learning has the upper bound

$ \begin{aligned} & \sum\limits_{t=1}^{T} \ell_{t}\left(\boldsymbol{w}_{t}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\inf _{\boldsymbol{w} \in \boldsymbol{W}} \sum\limits_{t=1}^{T} \ell_{t}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right) \leqslant \\ & \sum\limits_{t=1}^{T} L_{t}\left\|\boldsymbol{w}_{t}-\overline{\boldsymbol{w}}_{t+1}\right\|_{1}+\frac{d(\beta T+D)}{\eta}+\alpha T 。\end{aligned} $

This completes the proof.

Theorem 1 establishes the relation between stability and the regret of "predictable" non-convex generalized online pairwise learning: once the on-average stability can be bounded, the regret converges accordingly. We next bound the stability term $\left\|\boldsymbol{w}_{t}-\overline{\boldsymbol{w}}_{t+1}\right\|_{1}$.

Theorem 2  Let $\boldsymbol{w}_t(\boldsymbol{\sigma})$ be the prediction of "predictable" non-convex generalized online pairwise learning in round t. For any c > 0, the following monotonicity property holds:

$ \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}+c \boldsymbol{e}_i\right) \geqslant \boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\frac{2\left(\alpha+\beta\|\boldsymbol{\sigma}\|_1\right)}{c}-\beta, $

where σ is the random perturbation, $\boldsymbol{e}_i$ denotes the i-th standard basis vector, and $\boldsymbol{w}_{t, i}$ denotes the i-th coordinate of $\boldsymbol{w}_t$.

证明$ \ell_{1: t}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)=\sum_{i=1}^{t} \ell_{i}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right), \boldsymbol{\sigma}^{\prime}=\boldsymbol{\sigma}+c \boldsymbol{e}_{i}, \gamma(\boldsymbol{\sigma})=\alpha+\beta\|\boldsymbol{\sigma}\|_{1}$为离线神谕的近似误差,由wt(σ)的近似最优化,可得

$ \begin{aligned} & \ell_{1: t-1}\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma})\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle \leqslant \\ & \ell_{1: t-1}\left(\boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right)\right)- \\ & \left\langle\boldsymbol{\sigma}, \boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right)\right\rangle+\gamma(\boldsymbol{\sigma})= \\ & \ell_{1: t-1}\left(\boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+ \\ & g_t\left(\boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right)\right)-\left\langle\boldsymbol{\sigma}^{\prime}, \boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right)\right\rangle+c \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right)+\gamma(\boldsymbol{\sigma}) \leqslant \\ & \ell_{1: t-1}\left(\boldsymbol{w}_t\left(\boldsymbol{\sigma}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma})\right)-\right. \\ & \left\langle\boldsymbol{\sigma}^{\prime}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle+c \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right)+\gamma(\boldsymbol{\sigma})+\gamma\left(\boldsymbol{\sigma}^{\prime}\right)= \\ & \ell_{1: t-1}\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma})\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle+ \\ & c\left(\boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right)-\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})\right)+\gamma(\boldsymbol{\sigma})+\gamma\left(\boldsymbol{\sigma}^{\prime}\right), \end{aligned} $

where the second inequality follows from the approximate optimality of $\boldsymbol{w}_t\left(\boldsymbol{\sigma}^{\prime}\right)$. Combining the first and last expressions gives

$ \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right) \geqslant \boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\frac{2 \gamma(\boldsymbol{\sigma})}{c}-\beta, $

This completes the proof.

Theorem 2 establishes the stability of "predictable" non-convex generalized online pairwise learning: by observing how a change in the perturbation vector affects the output of generalized online pairwise learning, it shows monotonicity in the one-dimensional case. Since stability in online learning refers to the distance between the hypotheses of two consecutive iterations, Theorem 2 yields the stability in the one-dimensional case.

Theorem 3  Let $\boldsymbol{w}_t(\boldsymbol{\sigma})$ be the hypothesis of "predictable" non-convex generalized online pairwise learning at iteration t, and assume $\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\|_{1} \leqslant 10 d\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right|$. Then for $\boldsymbol{\sigma}^{\prime}=\boldsymbol{\sigma}+100 L_{t} d \boldsymbol{e}_{i}$,

$ \begin{aligned} & \min \left(\boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right), \overline{\boldsymbol{w}}_{t+1, i}\left(\boldsymbol{\sigma}^{\prime}\right)\right) \geqslant \max \left(\boldsymbol{w}_{t, i}(\boldsymbol{\sigma}), \overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right)- \\ & \frac{1}{10}\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right|-\frac{3\left(\alpha+\beta\|\boldsymbol{\sigma}\|_1\right)}{100 L_t d}-\beta, \end{aligned} $

where σ is the random perturbation, $\boldsymbol{e}_i$ denotes the i-th standard basis vector, and $\boldsymbol{w}_{t, i}$ denotes the i-th coordinate of $\boldsymbol{w}_t$.

证明$ \ell_{1: t}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)=\sum_{i=1}^{t} \ell_{i}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right) , \gamma(\boldsymbol{\sigma})=\alpha+\beta\|\boldsymbol{\sigma}\|_{1}$为离线神谕的近似误差,由wt(σ)的近似最优化,可得

$ \begin{aligned} & \ell_{1: t-1}\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle+\ell_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right) \leqslant \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right)-\left\langle\boldsymbol{\sigma}, \bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\rangle+ \\ & \ell_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-g_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma})\right)+\gamma(\boldsymbol{\sigma}) \leqslant \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right)-\left\langle\boldsymbol{\sigma}, \bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\rangle+ \\ & \ell_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-g_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right)+ \\ & L_t\left\|\boldsymbol{w}_t(\boldsymbol{\sigma})-\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\|_1+\gamma(\boldsymbol{\sigma}) \leqslant \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+g_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right)-\left\langle\boldsymbol{\sigma}, \bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\rangle+ \\ & \ell_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-g_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right)+ \\ & 10 L_t \boldsymbol{d}\left|\boldsymbol{\boldsymbol{w}}_{t, i}(\boldsymbol{\sigma})-\bar{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right|+\gamma(\boldsymbol{\sigma}), \end{aligned} $ (3)

where the second inequality follows from the Lipschitz continuity of $\ell_t(\cdot)$ and the third from the assumption on $\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\|_{1}$. By the approximate optimality of $\overline{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right)$, we get

$ \begin{aligned} & \ell_{1: t-1}\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle+\ell_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)= \\ & \ell_{1: t-1}\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}^{\prime}, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle+ \\ & \ell_t\left(\boldsymbol{w}_t(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+\left\langle 100 L_t d e_i, \boldsymbol{w}_t(\boldsymbol{\sigma})\right\rangle \geqslant \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right)-\left\langle\boldsymbol{\sigma}^{\prime}, \bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right)\right\rangle+ \\ & \ell_t\left(\bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+100 L_t d \boldsymbol{w}_{t, i}(\boldsymbol{\sigma})= \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right)\right\rangle+ \\ & \ell_t\left(\bar{\boldsymbol{w}}_{t+1}\left(\boldsymbol{\sigma}^{\prime}\right), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+100 L_t d\left(\boldsymbol{\boldsymbol{w}}_{t, i}(\boldsymbol{\sigma})-\bar{\boldsymbol{w}}_{t+1, i}\left(\boldsymbol{\sigma}^{\prime}\right)\right) \geqslant \\ & \ell_{1: t-1}\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\left\langle\boldsymbol{\sigma}, \bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})\right\rangle+ \\ & \ell_t\left(\bar{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma}), \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)+100 L_t d\left(\boldsymbol{\boldsymbol{w}}_{t, i}(\boldsymbol{\sigma})-\bar{\boldsymbol{w}}_{t+1, i}\left(\boldsymbol{\sigma}^{\prime}\right)\right), \end{aligned} $ (4)

where the last inequality follows from the approximate optimality of $\overline{\boldsymbol{w}}_{t+1}(\boldsymbol{\sigma})$. Combining (3) and (4) gives

$ \begin{aligned} & \overline{\boldsymbol{w}}_{t+1, i}\left(\boldsymbol{\sigma}^{\prime}\right)-\boldsymbol{w}_{t, i}(\boldsymbol{\sigma}) \geqslant \\ & -\frac{1}{10}\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right|-\frac{\gamma(\boldsymbol{\sigma})}{100 L_t d}。\end{aligned} $ (5)

Similarly,

$ \begin{aligned} & \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right)-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma}) \geqslant \\ & -\frac{1}{10}\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma})\right|-\frac{\gamma(\boldsymbol{\sigma})}{100 L_t d}, \end{aligned} $ (6)

By the monotonicity of Theorem 2,

$ \begin{gathered} \overline{\boldsymbol{w}}_{t+1, i}\left(\boldsymbol{\sigma}^{\prime}\right)-\overline{\boldsymbol{w}}_{t+1, i}(\boldsymbol{\sigma}) \geqslant 0, \\ \boldsymbol{w}_{t, i}\left(\boldsymbol{\sigma}^{\prime}\right)-\boldsymbol{w}_{t, i}(\boldsymbol{\sigma}) \geqslant-\frac{3 \gamma(\boldsymbol{\sigma})}{100 L_t d}-\beta, \end{gathered} $ (7)

Combining (4)–(7) completes the proof.

Theorem 3 establishes the stability of "predictable" non-convex generalized online pairwise learning in the d-dimensional case. Although proving monotonicity in d dimensions is more challenging than in one dimension, the analysis can be reduced to the one-dimensional case by perturbing each coordinate of the perturbation term separately; as in Theorem 2, where monotonicity yields the stability of online pairwise learning, the stability in the d-dimensional case follows.

4 Regret Bound of "Predictable" Non-convex Generalized Online Pairwise Learning

Theorem 4  Let D be a bound on the decision set $\boldsymbol{W} \subseteq \mathbf{R}^d$. Assume that, for every t ∈ [T], the prediction sequence g_t is such that (g_t - ℓ_t) is L_t-Lipschitz continuous with respect to the 1-norm, and that the learner can access an (α, β)-approximate optimization oracle. Then for any η, the predictions of "predictable" non-convex generalized online pairwise learning satisfy the regret bound

$ \begin{aligned} & \mathbb{E}\left[\frac{1}{T} \sum\limits_{t=1}^{T} \ell_{t}\left(\boldsymbol{w}_{t}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)-\frac{1}{T} \inf _{\boldsymbol{w} \in \boldsymbol{W}} \sum\limits_{t=1}^{T} \ell_{t}\left(\boldsymbol{w}, \boldsymbol{z}, \boldsymbol{z}^{\prime}\right)\right] \leqslant \\ & O\left(\eta d^{2} D \sum\limits_{t=1}^{T} \frac{L_{t}^{2}}{T}+\frac{d(\beta T+D)}{\eta T}+\alpha+\beta d \sum\limits_{t=1}^{T} \frac{L_{t}}{T}\right) 。\end{aligned} $

Proof  Using the same notation as in Theorems 2 and 3, $\mathbb{E}\left[\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_{1}\right]$ can be written as

$ \begin{aligned} & \mathbb{E}\left[\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_{1}\right]= \\ & \sum\limits_{i=1}^{d} \mathbb{E}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right], \end{aligned} $ (8)

so a bound on $\mathbb{E}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right]$ for every i ∈ [d] yields a bound on $\mathbb{E}\left[\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_{1}\right]$. For any i ∈ [d], define $\mathbb{E}_{-i}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right]$ as

$ \begin{aligned} & \mathbb{E}_{-i}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right]:= \\ & \mathbb{E}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right| \mid\left\{\boldsymbol{\sigma}_{j}\right\}_{j \neq i}\right], \end{aligned} $

where $\boldsymbol{\sigma}_{j}$ denotes the value of σ in coordinate j.

$ \boldsymbol{w}_{\text {max }, i}(\boldsymbol{\sigma})=\max \left(\boldsymbol{w}_{t, i}(\boldsymbol{\sigma}), \boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right)$。类似地,令$ \boldsymbol{w}_{\text {min }, i}(\boldsymbol{\sigma})=\min \left(\boldsymbol{w}_{t, i}(\boldsymbol{\sigma}), \boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right)$。则

$ \begin{aligned} & \mathbb{E}_{-i}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right]= \\ & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-\mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right], \end{aligned} $

Define

$ \begin{aligned} & \varepsilon=\left\{\boldsymbol{\sigma}:\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_{1} \leqslant\right. \\ & \left.10 d\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right\}, \end{aligned} $

and lower bound the terms T_1 and T_2 in the following expression:

$ \begin{aligned} & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right] \geqslant \\ & \mathbb{P}\left(\boldsymbol{\sigma}_{i}<100 L_{t} d\right) \underbrace{\mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma}) \mid \boldsymbol{\sigma}_{i}<100 L_{t} d\right]}_{T_{1}}+ \\ & \underbrace{\mathbb{P}\left(\boldsymbol{\sigma}_{i} \geqslant 100 L_{t} d\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma}) \mid \boldsymbol{\sigma}_{i} \geqslant 100 L_{t} d\right]}_{T_{2}}, \end{aligned} $ (9)

Since the domain of the i-th coordinate lies in some interval of length D, and both T_1 and $\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]$ are points of this interval, their difference is bounded by D, so T_1 is lower bounded by $\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D$. Rewrite T_2 as

$ T_{2}=\mathbb{P}\left(\boldsymbol{\sigma}_{i} \geqslant 100 L_{t} d\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma}) \mid \boldsymbol{\sigma}_{i} \geqslant 100 L_{t} d\right]=\int\limits_{\boldsymbol{\sigma}_{i}=100 L_{t} d}^{\infty} \boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\, P\left(\boldsymbol{\sigma}_{i}\right) \mathrm{d} \boldsymbol{\sigma}_{i}=\int\limits_{\boldsymbol{\sigma}_{i}=100 L_{t} d}^{\infty} \boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\, \eta \mathrm{e}^{-\eta \boldsymbol{\sigma}_{i}} \mathrm{~d} \boldsymbol{\sigma}_{i} 。$

Change variables in the above integral: let $\boldsymbol{\sigma}_{i}=\boldsymbol{\sigma}_{i}^{\prime}+100 L_{t} d$, and let $\boldsymbol{\sigma}^{\prime}=\left[\boldsymbol{\sigma}_{1}, \cdots, \boldsymbol{\sigma}_{i-1}, \boldsymbol{\sigma}_{i}^{\prime}, \boldsymbol{\sigma}_{i+1}, \cdots\right]$ be the vector obtained from σ by replacing its i-th coordinate with $\boldsymbol{\sigma}_{i}^{\prime}$. Then

$ \begin{aligned} & \int\limits_{\boldsymbol{\sigma}_{i}=100 L_{t} d}^{\infty} \boldsymbol{w}_{\min , i}(\boldsymbol{\sigma}) \eta \mathrm{e}^{-\eta \boldsymbol{\sigma}_{i}} \mathrm{~d} \boldsymbol{\sigma}_{i}= \\ & \int\limits_{\boldsymbol{\sigma}_{i}^{\prime}=0}^{\infty} \boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}^{\prime}+100 L_{t} d \boldsymbol{e}_{i}\right) \eta \mathrm{e}^{-\eta\left(\boldsymbol{\sigma}_{i}^{\prime}+100 L_{t} d\right)} \mathrm{d} \boldsymbol{\sigma}_{i}^{\prime}= \\ & \mathrm{e}^{-100 \eta L_{t} d} \int\limits_{\boldsymbol{\sigma}_{i}^{\prime}=0}^{\infty} \boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}^{\prime}+100 L_{t} d \boldsymbol{e}_{i}\right) \eta \mathrm{e}^{-\eta \boldsymbol{\sigma}_{i}^{\prime}} \mathrm{d} \boldsymbol{\sigma}_{i}^{\prime}= \\ & \mathrm{e}^{-100 \eta L_{t} d} \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}^{\prime}+100 L_{t} d \boldsymbol{e}_{i}\right)\right] 。\end{aligned} $

$ T_{2}=\boldsymbol{e}^{-100 \eta L_{t}^{d}} \mathbb{E}_{-i}\left[\boldsymbol{w}_{\text {min }, i}\left(\boldsymbol{\sigma}+100 L_{t} d \boldsymbol{e}_{i}\right)\right]$T1T2的下界代入式(9)中,可得

$ \begin{aligned} & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right] \geqslant \\ & \left(1-\exp \left(-100 \eta L_{t} d\right)\right)\left(\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D\right)+ \\ & \exp \left(-100 \eta L_{t} d\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}+100 L_{t} d \boldsymbol{e}_{i}\right)\right] 。\end{aligned} $

$ \mathbb{E}_i\left[\boldsymbol{w}_{\text {min }, i}(\boldsymbol{\sigma})\right]$下界为

$ \begin{aligned} & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right] \geqslant \\ & \left(1-\exp \left(-100 \eta L_t d\right)\right)\left(\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D\right)+ \\ & \exp \left(-100 \eta L_t d\right) \mathbb{P}_{-i}(\varepsilon) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}+100 L_t d \boldsymbol{e}_i\right) \mid \varepsilon\right]+ \\ & \exp \left(-100 \eta L_t d\right) \mathbb{P}_{-i}\left(\varepsilon^c\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}\left(\boldsymbol{\sigma}+100 L_t d \boldsymbol{e}_i\right) \mid \varepsilon^c\right], \end{aligned} $

where $\mathbb{P}_{-i}(\varepsilon):=\mathbb{P}\left(\varepsilon \mid\left\{\boldsymbol{\sigma}_j\right\}_{j \neq i}\right)$. Using the monotonicity established in Theorems 2 and 3, we obtain a lower bound on $\mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right]$. With $\gamma(\boldsymbol{\sigma})=\alpha+\beta\|\boldsymbol{\sigma}\|_{1}$,

$ \begin{aligned} & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right] \geqslant \\ & \left(1-\exp \left(-100 \eta L_t d\right)\right)\left(\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D\right)+ \\ & \exp \left(-100 \eta L_{t} d\right) \mathbb{P}_{-i}(\varepsilon) \mathbb{E}_{-i}\left[\left.\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})-\frac{1}{10} \right\rvert\, \boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\right. \\ & \left.\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\left|-\frac{3 \gamma(\boldsymbol{\sigma})}{100 L_{t} d}-\beta\right| \varepsilon\right]+ \\ & \exp \left(-100 \eta L_{t} d\right) \mathbb{P}_{-i}\left(\varepsilon^{c}\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})-\frac{2 \gamma(\boldsymbol{\sigma})}{100 L_{t} d}-\right. \\ & \left.\beta \mid \varepsilon^{c}\right] \geqslant \\ & \left(1-\exp \left(-100 \eta L_{t} d\right)\right)\left(\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D\right)+ \\ & \exp \left(-100 \eta L_{t} d\right) \mathbb{P}_{-i}(\varepsilon) \mathbb{E}_{-i}\left[\left.\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})-\frac{1}{10} \right\rvert\, \boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\right. \\ & \left.\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\left|-\frac{3 \gamma(\boldsymbol{\sigma})}{100 L_{t} d}-\beta\right| \varepsilon\right]+ \\ & \exp \left(-100 \eta L_{t} d\right) \mathbb{P}_{-i}\left(\varepsilon^{c}\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})-\right. \\ & \left.\left.\left.\frac{1}{10 d}\left\|\boldsymbol{w}_{t}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|\right|_{1}-\frac{2 \gamma(\boldsymbol{\sigma})}{100 L_{t} d}-\beta \right\rvert\, \varepsilon^{c}\right], \end{aligned} $

where the first inequality follows from Theorems 2 and 3, and the second from the definition of the event ε and its complement ε^c. Since $\mathbb{P}_{-i}(\varepsilon) \leqslant 1$, we get

$ \begin{aligned} & \mathbb{E}_{-i}\left[\boldsymbol{w}_{\min , i}(\boldsymbol{\sigma})\right] \geqslant \\ & \left(1-\exp \left(-100 \eta L_t d\right)\right)\left(\mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]-D\right)+ \\ & \exp \left(-100 \eta L_t d\right) \mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})-\frac{3 \gamma(\boldsymbol{\sigma})}{100 L_t d}-\beta\right]- \\ & \exp \left(-100 \eta L_t d\right) \mathbb{E}_{-i}\left[\frac{1}{10}\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|+\right. \\ & \left.\frac{1}{10 d}\left\|\boldsymbol{w}_t(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_1\right] \geqslant \mathbb{E}_{-i}\left[\boldsymbol{w}_{\max , i}(\boldsymbol{\sigma})\right]- \\ & 100 \eta L_t d D-\frac{3 \gamma(\boldsymbol{\sigma})}{100 L_t d}-\beta-\mathbb{E}_{-i}\left[\frac{1}{10}\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})\right|-\right. \\ & \left.\left|\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|+\frac{1}{10 d}\left\|\boldsymbol{w}_t(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_1\right]_{\circ} \end{aligned} $

The last step above holds since exp(w) ≥ 1 + w. It follows that

$ \begin{aligned} & \mathbb{E}_{-i}\left[\left|w_{t, i}(\boldsymbol{\sigma})-w_{t+1, i}(\boldsymbol{\sigma})\right|\right] \leqslant \\ & \frac{1}{9 d} \mathbb{E}{ }_{-i}\left[\left\|w_t(\boldsymbol{\sigma})-w_{t+1}(\boldsymbol{\sigma})\right\|_1\right]+ \\ & \frac{1000}{9} \eta L_t d D+\frac{\mathbb{E}_{-i}[\gamma(\boldsymbol{\sigma})]}{30 L_t d}+\frac{10}{9} \beta_{。} \end{aligned} $

Since the above bound holds for any $\left\{\boldsymbol{\sigma}_j\right\}_{j \neq i}$, the same bound holds for the unconditional expectation,

$ \begin{aligned} & \mathbb{E}_{-i}\left[\left|\boldsymbol{w}_{t, i}(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1, i}(\boldsymbol{\sigma})\right|\right] \leqslant \\ & \frac{1}{9 d} \mathbb{E}{ }_{-i}\left[\left\|\boldsymbol{w}_t(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_1\right]+ \\ & \frac{1000}{9} \eta L_t d D+\frac{\mathbb{E}[\gamma(\boldsymbol{\sigma})]}{30 L_t d}+\frac{10}{9} \beta_{。} \end{aligned} $ (10)

Substituting (10) into (8) yields the stability bound

$ \begin{aligned} & \mathbb{E}\left[\left\|\boldsymbol{w}_t(\boldsymbol{\sigma})-\boldsymbol{w}_{t+1}(\boldsymbol{\sigma})\right\|_1\right] \leqslant \\ & 125 \eta L_t d^2 D+\frac{\beta d}{20 \eta L_t}+2 \beta d+\frac{\alpha}{20 L_t}, \end{aligned} $ (11)

Substituting (11) into (2) completes the proof.

Theorem 4 assumes that the learner can access an (α, β)-approximate optimization oracle and that the loss functions are predictable and L_t-Lipschitz continuous. Taking $\eta=\frac{d}{d^{3 / 2} T^{1 / 2}-d}$, Theorem 4 yields the regret bound $O\left(d^{3 / 2} T^{-3 / 2}+\alpha+\beta d^{3 / 2} T^{-1 / 2}\right)$. Choosing $\alpha=O\left(T^{-3 / 2}\right)$ and $\beta=O\left(T^{3}\right)$ gives the best regret bound $O\left(T^{-3 / 2}\right)$.

5 Conclusion

By introducing random noise, this paper proposed a generalized online pairwise learning framework. Based on the approximately optimal hypotheses returned by the approximate oracle of reference [26], a generalized online pairwise learning algorithm with "predictable" non-convex losses was proposed. Furthermore, the generalization of generalized online pairwise learning was studied: through stability analysis, the best regret bound O(T^{-3/2}) for "predictable" generalized online pairwise learning was obtained.

Building on online gradient descent, the regret bounds of online pairwise learning with "predictable" non-convex loss functions can be studied further. In addition, the regret bounds and on-average stability results of this paper are stated in expectation; obtaining high-probability bounds is also a direction for future work.

References
[1]
YUAN J J, NONG Q Q. A note on an on-line algorithm for the parallel-batching scheduling to minimize makespan[J]. Journal of Zhengzhou university (natural science edition), 2006, 38(3): 1-3.
[2]
LI W J, MA R. Research on the online batch scheduling to maximize total number of the accepted jobs[J]. Journal of Zhengzhou university (natural science edition), 2016, 48(2): 24-28. DOI:10.13705/j.issn.1671-6841.2015216
[3]
REJCHEL W. On ranking and generalization bounds[J]. Journal of machine learning research, 2012, 13: 1373-1392.
[4]
WEINBERGER K Q, SAUL L K. Distance metric learning for large margin nearest neighbor classification[J]. Journal of machine learning research, 2009, 10: 207-244.
[5]
WANG B, KHARDON R, PECHYONY D, et al. Generalization bounds for online learning algorithms with pairwise loss functions[C]//Annual Conference Computational Learning Theory. Berlin: Springer Press, 2012: 1-22.
[6]
WANG Y Y, KHARDON R, PECHYONY D, et al. Online learning with pairwise loss functions[J]. Journal of machine learning research, 2013, 14: 1-37.
[7]
KAR P, SRIPERUMBUDUR B K, JAIN P, et al. On the generalization ability of online learning algorithms for pairwise loss functions[EB/OL]. (2013-05-11)[2023-07-22]. https://api.semanticscholar.org/CorpusID:8366093.
[8]
YING Y M, ZHOU D X. Unregularized online learning algorithms with general loss functions[J]. Applied and computational harmonic analysis, 2017, 42(2): 224-244. DOI:10.1016/j.acha.2015.08.007
[9]
YING Y M, ZHOU D X. Online pairwise learning algorithms[J]. Neural computation, 2016, 28(4): 743-777. DOI:10.1162/NECO_a_00817
[10]
CHEN X M, LEI Y W. Refined bounds for online pairwise learning algorithms[J]. Neurocomputing, 2018, 275: 2656-2665.
[11]
GUO Z C, YING Y M, ZHOU D X. Online regularized learning with pairwise loss functions[J]. Advances in computational mathematics, 2017, 43(1): 127-150.
[12]
WANG C, HU T. Online regularized pairwise learning with least squares loss[J]. Analysis and applications, 2020, 18(1): 49-78.
[13]
HU T, FAN J, XIANG D H. Convergence analysis of distributed multi-penalty regularized pairwise learning[J]. Analysis and applications, 2020, 18(1): 109-127.
[14]
BOUSQUET O, ELISSEEFF A. Stability and generalization[J]. Journal of machine learning research, 2002, 2: 499-526.
[15]
ELISSEEFF A, EVGENIOU T, PONTIL M. Stability of randomized learning algorithms[J]. Journal of machine learning research, 2005, 6: 55-79.
[16]
SHEN W, YANG Z H, YING Y M, et al. Stability and optimization error of stochastic gradient descent for pairwise learning[J]. Analysis and applications, 2020, 18(5): 887-927.
[17]
LEI Y W, LEDENT A, KLOFT M. Sharper generalization bounds for pairwise learning[J]. Advances in neural information processing systems, 2020, 33: 21236-21246.
[18]
HAZAN E, KALE S. Extracting certainty from uncertainty: regret bounded by variation in costs[J]. Machine learning, 2010, 80(2/3): 165-188.
[19]
CHIANG C K, YANG T B, LEE C J, et al. Online optimization with gradual variations[C]//Conference on Learning Theory. Berlin: Springer Press, 2012: 1-20.
[20]
RAKHLIN A, SRIDHARAN K. Online learning with predictable sequences[EB/OL]. (2012-08-18)[2023-07-23]. https://arxiv.org/abs/1208.3728.pdf.
[21]
MCMAHAN B. Follow-the-regularized-leader and mirror descent: equivalence theorems and l1 regularization[C]//Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. Cambridge: MIT Press, 2011: 525-533.
[22]
AGARWAL N, GONEN A, HAZAN E. Learning in nonconvex games with an optimization oracle[EB/OL]. (2018-10-17)[2023-07-23]. https://arxiv.org/abs/1810.07362.pdf.
[23]
FINE M. Certifiably accurate private data release with generative adversarial networks[D]. Boston: Harvard University, 2020.
[24]
GRNAROVA P, LEVY K Y, LUCCHI A, et al. An online learning approach to generative adversarial networks[EB/OL]. (2017-06-10)[2023-07-22]. https://arxiv.org/pdf/1706.03269.
[25]
ROSS S, BAGNELL J A. Stability conditions for online learnability[EB/OL]. (2011-08-16)[2023-07-22]. https://arxiv.org/pdf/1108.3154.
[26]
SUGGALA A S, NETRAPALLI P. Online non-convex learning: following the perturbed leader is optimal[EB/OL]. (2019-03-19)[2023-07-22]. https://arxiv.org/abs/1903.08110.pdf.