ZHENG Feng-de, ZHANG Hong-bin. Online Lagrangian Support Vector Regression[J]. Journal of Beijing University of Technology, 2013, 39(7): 1065-1071.

    Online Lagrangian Support Vector Regression

      Abstract: To speed up online support vector regression, an online incremental learning algorithm based on Lagrangian support vector regression (LSVR) is proposed. The unconstrained optimization problem arising in LSVR can be solved by a fast iterative algorithm that converges from any starting point. At the start of the iteration, LSVR requires inverting only one matrix, whose order equals the number of input samples plus one. In the linear case, the online incremental LSVR algorithm applies the Sherman-Morrison-Woodbury (S-M-W) identity to markedly reduce computing time; in the nonlinear case, the matrix inverse after each increment is updated from previously computed results, avoiding much repeated computation. Comparative experiments on several data sets show that the proposed algorithm maintains the accuracy of earlier algorithms while greatly reducing training time.
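The core trick in the linear case is the Sherman-Morrison-Woodbury identity, which refreshes a stored matrix inverse after a low-rank change instead of re-inverting from scratch. The sketch below is not the paper's exact LSVR update; it is a minimal illustration of the rank-one (Sherman-Morrison) special case, assuming for concreteness that the matrix being maintained has the regularized Gram form I/C + XᵀX and grows by xxᵀ when a sample x arrives.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, in O(d^2) instead of O(d^3)."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au          # nonzero when A + u v^T is invertible
    return A_inv - np.outer(Au, vA) / denom

# Demo: incrementally maintain (I/C + X^T X)^{-1} as samples arrive one by one.
rng = np.random.default_rng(0)
d, C = 5, 10.0
A_inv = np.eye(d) * C             # inverse of I/C before any samples
X = rng.standard_normal((20, d))
for x in X:                       # each new sample adds x x^T to the matrix
    A_inv = sherman_morrison_update(A_inv, x, x)

# Verify against direct inversion over the full batch
direct = np.linalg.inv(np.eye(d) / C + X.T @ X)
assert np.allclose(A_inv, direct)
```

Each update costs O(d²), whereas re-inverting after every new sample would cost O(d³); this is the kind of saving the abstract attributes to the S-M-W identity in the linear case.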

       
