

Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions

Author(s)
Yuyang Wang
Roni Khardon
Dmitry Pechyony
Rosie Jones
Contributor(s)
The Pennsylvania State University CiteSeerX Archives
Keywords
Generalization bounds
Pairwise loss functions
Online learning
Loss bounds

Full record
URI
http://hdl.handle.net/20.500.12424/794419
Online Access
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.297.4676
http://www.cs.technion.ac.il/~pechyony/oam_colt_final.pdf
Abstract
Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a pointwise loss cannot be directly applied to pairwise losses. Using the Hoeffding-Azuma inequality and various proof techniques for risk bounds in batch learning, we derive data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low-risk hypothesis from the sequence. In addition, we analyze a natural extension of the perceptron algorithm for the bipartite ranking problem, providing a bound on its empirical pairwise loss. Combining these results yields a complete risk analysis of the proposed algorithm.
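The abstract mentions an extension of the perceptron to bipartite ranking with a pairwise loss: instead of penalizing each example individually, the learner penalizes pairs in which a positive example is not scored above a negative one. The following is a minimal illustrative sketch of that idea, not the paper's exact algorithm or update rule; the function name and update are assumptions made for the example.

```python
import numpy as np

def pairwise_perceptron(stream, dim):
    """Perceptron-style online learner with a pairwise loss (illustrative sketch).

    Each incoming example is paired with all previously seen examples of the
    opposite label; the weight vector is updated whenever a pair is mis-ranked,
    i.e. the positive example does not score strictly above the negative one.
    """
    w = np.zeros(dim)
    seen = []       # (x, y) examples observed so far
    mistakes = 0    # number of mis-ranked pairs (an empirical pairwise loss count)
    for x, y in stream:
        for x_prev, y_prev in seen:
            if y != y_prev:
                # Orient the pair so the positive example comes first.
                diff = (x - x_prev) if y == 1 else (x_prev - x)
                if w @ diff <= 0:   # pair mis-ranked by the current hypothesis
                    w = w + diff    # perceptron-style additive update
                    mistakes += 1
        seen.append((x, y))
    return w, mistakes
```

Ranking every new example against the full history is what makes pairwise losses harder to analyze than pointwise ones: each update depends on previously seen data, which is why the paper's bounds rely on martingale tools such as the Hoeffding-Azuma inequality.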
Date
2013-07-17
Type
text
Identifier
oai:CiteSeerX.psu:10.1.1.297.4676
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.297.4676
Copyright/License
Metadata may be used without restrictions as long as the oai identifier remains attached to it.
Collections
OAI Harvested Content

