Simple item record

dc.date.accessioned 2012-11-28T19:08:11Z
dc.date.available 2012-11-28T19:08:11Z
dc.date.issued 1998-11
dc.identifier.uri http://sedici.unlp.edu.ar/handle/10915/24825
dc.description.abstract Features such as fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. A neural network is, by itself, an inherently parallel system where many extremely simple processing units work simultaneously on the same problem, building up a computational device which possesses adaptation (learning) and generalisation (recognition) abilities. Implementation of neural networks roughly involves at least three stages: design, training and testing. The second, being CPU intensive, is the one requiring most of the processing resources, and depending on size and structure complexity the learning process can be extremely long. Thus, great effort has been made to develop parallel implementations intended to reduce learning time. Pattern partitioning is an approach to parallelising neural networks in which the whole net is replicated in different processors and the weight changes owing to diverse training patterns are parallelised. This approach is the most suitable for a distributed architecture such as the one considered here. Incoming task allocation, as a previous step, is a fundamental service aimed at improving distributed system performance and facilitating further dynamic load balancing. A Neural Network Device inserted into the kernel of a distributed system as an intelligent tool allows automatic allocation of execution requests under some predefined performance criteria based on resource availability and incoming process requirements. This paper, being a twofold proposal, shows firstly some design and implementation insights to build a system where decision support for load distribution is based on a neural network device, and secondly a distributed implementation to provide parallel learning of neural networks using a pattern partitioning approach. In the latter case, some performance results of the parallelised approach for learning of backpropagation neural networks are shown. These include a comparison of recall and generalisation abilities and speed-up when using a socket interface or PVM. en
dc.language en es
dc.subject Distributed Systems es
dc.subject distributed systems workload en
dc.subject parallelised neural networks en
dc.subject System architectures es
dc.subject Neural nets es
dc.subject backpropagation en
dc.subject PATTERN RECOGNITION es
dc.subject partitioning schemes en
dc.subject pattern partitioning en
dc.subject system architecture es
dc.title Parallel backpropagation neural networks for task allocation by means of PVM en
dc.type Conference object es
sedici.creator.person Crespo, María Liz es
sedici.creator.person Printista, Alicia Marcela es
sedici.creator.person Piccoli, María Fabiana es
sedici.description.note Intelligent Systems es
sedici.subject.materias Computer Science es
sedici.subject.materias Informatics es
sedici.description.fulltext true es
mods.originInfo.place Red de Universidades con Carreras en Informática (RedUNCI) es
sedici.subtype Conference object es
sedici.rights.license Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Argentina (CC BY-NC-SA 2.5)
sedici.rights.uri http://creativecommons.org/licenses/by-nc-sa/2.5/ar/
sedici.date.exposure 1998-10
sedici.relation.event IV Congreso Argentino de Ciencias de la Computación es
sedici.description.peerReview peer-review es
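
The abstract above outlines the pattern-partitioning approach: every processor holds a full replica of the network and computes the weight changes due to its own subset of the training patterns, after which the partial updates are merged. The following is a minimal, hypothetical sketch of that idea; it uses Python's standard multiprocessing module in place of the socket/PVM transports evaluated in the paper, and the network size, learning rate, number of epochs and XOR training set are arbitrary illustrative choices, not taken from the paper.

```python
# Hypothetical illustration of pattern-partitioned backpropagation -- not the authors' code.
import numpy as np
from multiprocessing import Pool


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def partition_gradients(args):
    """Worker: full weight replica plus one partition of the training patterns.
    Returns the weight changes (gradients) produced by that partition only."""
    W1, W2, X, T = args
    H = sigmoid(X @ W1)               # hidden activations (forward pass)
    Y = sigmoid(H @ W2)               # outputs
    dY = (Y - T) * Y * (1 - Y)        # output-layer deltas (backward pass)
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer deltas
    return X.T @ dH, H.T @ dY         # gradients w.r.t. W1 and W2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tiny XOR training set, chosen only to keep the sketch self-contained.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(scale=0.5, size=(2, 4))   # 2 inputs -> 4 hidden units
    W2 = rng.normal(scale=0.5, size=(4, 1))   # 4 hidden units -> 1 output
    lr, n_workers, epochs = 0.5, 2, 2000      # arbitrary illustrative values

    with Pool(n_workers) as pool:
        for _ in range(epochs):
            # Split the patterns among workers; every worker receives the
            # same replicated weights but a different pattern subset.
            xs = np.array_split(X, n_workers)
            ts = np.array_split(T, n_workers)
            grads = pool.map(partition_gradients,
                             [(W1, W2, xc, tc) for xc, tc in zip(xs, ts)])
            # Merge the partial weight changes from all partitions.
            W1 = W1 - lr * sum(g[0] for g in grads)
            W2 = W2 - lr * sum(g[1] for g in grads)

    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))   # outputs after training
```

Summing the per-partition gradients each epoch reproduces the full-batch backpropagation update, which is what makes this partitioning scheme equivalent to serial training while distributing the CPU-intensive gradient computation across processors.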


Except where explicitly stated otherwise, this item is published under the following license: Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Argentina (CC BY-NC-SA 2.5).