
Visual feature learning

Author(s)
Piater, Justus H
Keywords
Computer Science

URI
http://hdl.handle.net/20.500.12424/842561
Online Access
http://scholarworks.umass.edu/dissertations/AAI3000331
Abstract
Humans learn robust and efficient strategies for visual tasks through interaction with their environment. In contrast, most current computer vision systems have no such learning capabilities. Motivated by insights from psychology and neurobiology, I combine machine learning and computer vision techniques to develop algorithms for visual learning in open-ended tasks. Learning is incremental and makes only weak assumptions about the task environment.

I begin by introducing an infinite feature space that contains combinations of local edge and texture signatures not unlike those represented in the human visual cortex. Such features can express distinctions over a wide range of specificity or generality. The learning objective is to select a small number of highly useful features from this space in a task-driven manner. Features are learned by general-to-specific random sampling. This is illustrated on two different tasks, for which I give very similar learning algorithms based on the same principles and the same feature space.

The first system incrementally learns to discriminate visual scenes. Whenever it fails to recognize a scene, new features are sought that improve discrimination. Highly distinctive features are incorporated into dynamically updated Bayesian network classifiers. Even after all recognition errors have been eliminated, the system can continue to learn better features, resembling mechanisms underlying human visual expertise. This tends to improve classification accuracy on independent test images, while reducing the number of features used for recognition.

In the second task, the visual system learns to anticipate useful hand configurations for a haptically-guided dexterous robotic grasping system, much like humans do when they pre-shape their hand during a reach. Visual features are learned that correlate reliably with the orientation of the hand. A finger configuration is recommended based on the expected grasp quality achieved by each configuration.

The results demonstrate how a largely uncommitted visual system can adapt and specialize to solve particular visual tasks. Such visual learning systems have great potential in application scenarios that are hard to model in advance, e.g. autonomous robots operating in natural environments. Moreover, this dissertation contributes to our understanding of human visual learning by providing a computational model of task-driven development of feature detectors.
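The abstract describes a failure-driven learning loop: when the system misclassifies a scene, it samples candidate features from general to specific and keeps only the highly useful ones. The following is a minimal toy sketch of that loop's shape, not the dissertation's actual algorithm; the feature representation, the `usefulness` score, and the simple nearest-feature classifier are all illustrative stand-ins (the real system uses edge/texture feature combinations and Bayesian network classifiers).

```python
import random

def sample_feature(parent=None):
    """Draw a random feature; specializing an existing one when a parent
    is given. A feature is modeled as a tuple of local signatures."""
    base = parent if parent is not None else ()
    return base + (random.random(),)

def usefulness(feature, scene):
    """Toy stand-in for how well a feature discriminates a scene."""
    return (sum(feature) * scene) % 1.0

def classify(scene, learned):
    """Predict the label attached to the most useful learned feature."""
    if not learned:
        return None
    best = max(learned, key=lambda fl: usefulness(fl[0], scene))
    return best[1]

def learn_incrementally(scenes, labels, threshold=0.5, budget=20):
    """Failure-driven loop: on each misclassification, search for a new
    distinctive feature by general-to-specific random sampling."""
    learned = []  # small set of highly useful (feature, label) pairs
    for scene, label in zip(scenes, labels):
        if classify(scene, learned) != label:
            # Start from a general (short) feature and repeatedly try
            # random specializations, keeping those that improve.
            feature = sample_feature()
            for _ in range(budget):
                candidate = sample_feature(parent=feature)
                if usefulness(candidate, scene) > usefulness(feature, scene):
                    feature = candidate
            # Only sufficiently distinctive features are retained.
            if usefulness(feature, scene) > threshold:
                learned.append((feature, label))
    return learned
```

The point of the sketch is the control flow: learning happens only on recognition failures, sampling proceeds from general features toward specialized ones, and a usefulness threshold keeps the retained feature set small.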
Date
2001-01-01
Type
text
Identifier
oai:scholarworks.umass.edu:dissertations-3471
http://scholarworks.umass.edu/dissertations/AAI3000331
Collections
OAI Harvested Content
