Article title

Multiple rank multi-linear SVM for matrix data classification



Buy the PowerPoint version of this article


Buy the Word version of this article



 

Table of contents

Introduction

Notation

Multiple rank multi-linear SVM

Performance analysis

Experiments

Conclusion




Excerpt from the article

On the other hand, compared with STM, SVM has a larger degree of freedom in choosing a feasible w, because it treats the mn elements of the matrix independently. However, it handles traditional vector data and vectorized matrix data in exactly the same way. Compared with traditional vector data, matrix data also carries some spatial dependence.
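To make the degrees-of-freedom comparison concrete, the following is a minimal NumPy sketch, not taken from the paper, contrasting a vector-based SVM score w^T vec(X) + b, which has mn free parameters, with a rank-1 bilinear (STM-style) score u^T X v + b, which has only m + n. All variable names are illustrative assumptions.

import numpy as np

# Sketch (not the authors' implementation): compare the scoring functions of a
# vectorized linear SVM and a rank-1 bilinear (STM-style) model on an m x n sample X.
m, n = 16, 20
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n))          # one matrix-shaped sample

# Vector-based SVM: X is flattened, so w has m*n independent entries.
w = rng.standard_normal(m * n)
b = 0.1
svm_score = w @ X.reshape(-1) + b        # w^T vec(X) + b

# STM-style rank-1 model: left/right projecting vectors u, v, only m + n entries.
u = rng.standard_normal(m)
v = rng.standard_normal(n)
stm_score = u @ X @ v + b                # u^T X v + b

print("vec-SVM parameters:   ", w.size)          # m*n = 320
print("rank-1 STM parameters:", u.size + v.size) # m+n = 36

The rank-1 model is far more constrained, which is exactly the gap the excerpt points to between SVM's flexibility and STM's respect for the two-mode structure of a matrix.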







Keywords:

Pattern recognition; Matrix data classification; Learning capacity; Generalization; SVM; STM

Multiple rank multi-linear SVM for matrix data classification

Chenping Hou (a), Feiping Nie (b), Changshui Zhang (c), Dongyun Yi (a), Yi Wu (a)
(a) Department of Mathematics and Systems Science, National University of Defense Technology, Changsha 410073, China
(b) Department of Computer Science and Engineering, University of Texas, Arlington 76019, USA
(c) Department of Automation, Tsinghua University, Beijing 100084, China

Article history: Received 23 August 2012; Received in revised form 5 June 2013; Accepted 4 July 2013; Available online 15 July 2013

Abstract

Matrices, or more generally, multi-way arrays (tensors) are common forms of data that are encountered in a wide range of real applications. How to classify this kind of data is an important research topic for both pattern recognition and machine learning. In this paper, by analyzing the relationship between two well-known traditional classification approaches, i.e., SVM and STM, a novel tensor-based method, multiple rank multi-linear SVM (MRMLSVM), is proposed. Different from traditional vector-based and tensor-based methods, multiple-rank left and right projecting vectors are employed to construct the decision boundary and establish the margin function. We reveal that the rank of the transformation can be regarded as a tradeoff parameter that balances the capacity of learning and generalization. We also propose an effective approach to solve the resulting non-convex optimization problem. The convergence behavior, initialization, computational complexity and parameter determination problems are analyzed. Compared with vector-based classification methods, MRMLSVM achieves higher accuracy and has lower computational complexity. Compared with traditional supervised tensor-based methods, MRMLSVM performs better for matrix data classification. Promising experimental results on various kinds of data sets are provided to show the effectiveness of our method. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Matrices, or more generally, multi-way arrays (tensors) are common forms of data that are encountered in a wide range of real applications. For example, all raster images are essentially digital readings of a grid of sensors, and matrix analysis is widely applied in image processing, e.g., photorealistic images of faces [1] and palms [2], and medical images [3]. In web search, one can easily obtain a large volume of images represented in the form of matrices. Besides, in video data mining, the data at each time frame is also a matrix. Therefore, matrix data analysis, in particular classification, has become one of the most important topics for both pattern recognition and computer vision.

Classification is arguably the most common task in pattern recognition, and relevant techniques are abundant in the literature. Standard methods, such as the K-nearest neighbors classifier (KNN) [4], the support vector machine (SVM) [5,6] and one-dimensional regression methods [7], are widely used in many fields. Among these approaches, some are similarity based, such as KNN, and some are margin based, such as SVM. Due to its practical effectiveness and theoretical soundness, SVM and its variants, especially linear SVM, have been widely used in many real applications [8,9]. For example, SVM has been combined with the factorization machine (FM) [10] for spammer discovery in social networks [11].
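The abstract above describes the MRMLSVM decision boundary as being built from multiple-rank left and right projecting vectors, with the rank acting as a tradeoff between learning capacity and generalization. The snippet below is a hedged sketch of one plausible reading of that description, a rank-r bilinear score f(X) = sum_k u_k^T X v_k + b; the function name mrml_score and all variable names are illustrative assumptions, not the paper's notation.

import numpy as np

def mrml_score(X, U, V, b):
    # Assumed rank-r bilinear score: sum_k u_k^T X v_k + b.
    # Equivalent to <U V^T, X> + b, so the effective weight matrix has rank <= r.
    return np.einsum('mk,mn,nk->', U, X, V) + b

m, n, r = 16, 20, 3
rng = np.random.default_rng(1)
X = rng.standard_normal((m, n))
U = rng.standard_normal((m, r))   # left projecting vectors u_1, ..., u_r
V = rng.standard_normal((n, r))   # right projecting vectors v_1, ..., v_r
b = 0.0

label = np.sign(mrml_score(X, U, V, b))   # +1 / -1 class prediction

Under this reading, increasing r enlarges the hypothesis space toward an unconstrained linear model on vec(X), while small r keeps the model close to STM, which is consistent with the abstract's interpretation of the rank as a capacity/generalization tradeoff parameter.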
Nevertheless, traditional classification methods are usually vector-based. They assume that the inputs of an algorithm are vectors, not matrix or tensor data. When they are applied to matrix data, the matrix structure must be collapsed to produce vector inputs for the algorithm. One common way is to concatenate the rows (or columns) of a matrix into a single vector. Although traditional classification approaches have achieved satisfactory performance in many cases, they may lack efficiency in handling matrix data by simply reformulating it into vectors. The main reasons are as follows:

(1) When we reformulate a matrix as a vector, the dimensionality of this vector is often very high. For example, for a small image of resolution 256 × 256, the dimensionality of the reformulated vector is 65,536 (the arithmetic is sketched below). The performance of traditional vector-based methods degrades as the dimensionality increases [12,13].

(2) With the increase in dimensionality, the computation time increases drastically. For example, the computational complexity of SVM is closely related to the dimensionality of the input data [9]. If the matrix is large, traditional approaches cannot be implemented in this scenario.

(3) When a matrix is collapsed into a vector, the spatial correlations of the matrix are lost [14]. For example, if an m × n image is represented as an mn-dimensional vector, this implies that the image is specified by mn independent variables.
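For concreteness, the arithmetic behind the 256 × 256 example, compared with the parameter count of the rank-r bilinear form sketched above (that comparison is an assumption of this note, not a figure from the paper):

# Dimensionality of the vectorized image versus parameters of an assumed
# rank-r bilinear model with left/right projecting vectors.
m, n = 256, 256
print(m * n)                      # 65536 entries for a vectorized linear model
for r in (1, 2, 5, 10):
    print(r, r * (m + n))         # 512, 1024, 2560, 5120 parameters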