dopetalk does not endorse any advertised product nor does it accept any liability for its use or misuse

This website has run out of funding so feel free to contribute if you can afford it (see footer)

Author Topic: Unreasonable effectiveness of learning neural networks  (Read 1632 times)

Offline Chip (OP)

  • Server Admin
  • Hero Member
  • Administrator
  • Join Date: Dec 2014
  • Location: Australia
  • Posts: 6509
  • Gender: Male
  • Deeply Confused Learner
  • Profession: IT Engineer
Unreasonable effectiveness of learning neural networks
« on: May 04, 2018, 12:34:24 PM »
source: http://www.pnas.org/content/pnas/113/48/E7655.full.pdf

Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes [2016]

Introduction

In artificial neural networks, learning from data is a computationally
demanding task in which a large number of connection
weights are iteratively tuned through stochastic-gradient-based
heuristic processes over a cost function. It is not well understood
how learning occurs in these systems, in particular how
they avoid getting trapped in configurations with poor computational
performance. Here, we study the difficult case of networks
with discrete weights, where the optimization landscape is
very rough even for simple architectures, and provide theoretical
and numerical evidence of the existence of rare—but extremely
dense and accessible—regions of configurations in the network
weight space. We define a measure, the robust ensemble (RE),
which suppresses trapping by isolated configurations and amplifies
the role of these dense regions. We analytically compute the
RE in some exactly solvable models and also provide a general
algorithmic scheme that is straightforward to implement: define
a cost function given by a sum of a finite number of replicas of
the original cost function, with a constraint centering the replicas
around a driving assignment. To illustrate this, we derive several
powerful algorithms, ranging from Markov Chains to message
passing to gradient descent processes, where the algorithms target
the robust dense states, resulting in substantial improvements
in performance. The weak dependence on the number of
precision bits of the weights leads us to conjecture that very
similar reasoning applies to more conventional neural networks.
Analogous algorithmic schemes can also be applied to other
optimization problems.
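The replica scheme the abstract describes can be sketched numerically. The following is a minimal toy illustration, not the authors' implementation: several replicas of a weight variable descend a hand-made rough 1D loss (my own construction, with one narrow sharp minimum and one wide flat minimum) while being pulled toward their mean, which plays the role of the "driving assignment"; the coupling strength is slowly increased so the replicas eventually collapse together.

```python
import numpy as np

# Toy 1D loss: a narrow, sharp minimum near w = -1 and a wide, flat
# minimum near w = +2. The wide well stands in for the "dense,
# accessible region" of configurations discussed in the abstract.
def loss(w):
    narrow = 1.0 - np.exp(-50.0 * (w + 1.0) ** 2)
    wide = 1.0 - np.exp(-0.5 * (w - 2.0) ** 2)
    return narrow * wide

def grad(w, eps=1e-5):
    # central-difference gradient, elementwise over an array of replicas
    return (loss(w + eps) - loss(w - eps)) / (2.0 * eps)

def replicated_gd(n_replicas=7, lr=0.05, gamma_max=5.0, steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(-3.0, 3.0, size=n_replicas)   # independent replica inits
    for t in range(steps):
        gamma = 0.1 + gamma_max * t / steps       # slowly tighten the coupling
        center = w.mean()                         # the "driving assignment"
        # each replica follows its own gradient plus a pull toward the center
        w = w - lr * (grad(w) + gamma * (w - center))
    return w

replicas = replicated_gd()
print(replicas)
```

With this schedule the replicas typically end up clustered in the wide well near w = 2 rather than in the sharp well near w = -1, which is the qualitative behavior the paper attributes to targeting robust dense states; all parameter values here are arbitrary choices for the toy.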

Significance

Artificial neural networks are some of the most widely used
tools in data science. Learning is, in principle, a hard problem
in these systems, but in practice heuristic algorithms often
find solutions with good generalization properties. We propose
an explanation of this good performance in terms of a
nonequilibrium statistical physics framework: We show that
there are regions of the optimization landscape that are both
robust and accessible and that their existence is crucial to
achieve good performance on a class of particularly difficult
learning problems. Building on these results, we introduce a
basic algorithmic scheme that improves existing optimization
algorithms and provides a framework for further research on
learning in neural networks.
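The discrete-weights case highlighted above can be illustrated with a replicated Monte Carlo search on a toy binary perceptron. This is a sketch in the spirit of the Markov-chain variant the abstract mentions, not the paper's algorithm: the instance, coupling form (a majority-vote center as the driving assignment) and all parameters are my own choices. A teacher rule is planted so a zero-error assignment is guaranteed to exist.

```python
import numpy as np

def energy(w, X, y):
    # number of patterns a sign-perceptron with binary weights misclassifies
    return int(np.sum(np.sign(X @ w) != y))

def replicated_metropolis(N=21, P=15, R=5, beta=2.0, gamma=0.5,
                          sweeps=300, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.choice([-1, 1], size=(P, N))      # random +/-1 input patterns
    teacher = rng.choice([-1, 1], size=N)     # planted rule: zero energy exists
    y = np.sign(X @ teacher)                  # labels (N odd, so never zero)
    W = rng.choice([-1, 1], size=(R, N))      # R replicas of binary weights
    best = P
    for _ in range(sweeps):
        center = np.sign(W.sum(axis=0))       # majority vote = driving assignment
        for a in range(R):
            for _ in range(N):                # one Metropolis sweep per replica
                i = rng.integers(N)
                E_old = energy(W[a], X, y)
                W[a, i] *= -1                 # trial single-weight flip
                # energy change: training error plus a term rewarding
                # alignment of the replica with the center
                dE = (energy(W[a], X, y) - E_old) \
                     - 2.0 * gamma * W[a, i] * center[i]
                if dE > 0 and rng.random() >= np.exp(-beta * dE):
                    W[a, i] *= -1             # reject: undo the flip
            best = min(best, energy(W[a], X, y))
    return best

print(replicated_metropolis())
```

On an instance this small the search typically drives the training error to or near zero; the point of the example is only the mechanics of coupling replicated Markov chains through a shared center.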

See the source link above for the full article.


