
FPGA Implementations of Neural Networks

In the 1980s and early 1990s, a great deal of research effort (both industrial
and academic) was expended on the design and implementation of hardware
neurocomputers [5, 6, 7, 8]. But, on the whole, most of these efforts may be
judged to have been unsuccessful: at no time have hardware neurocomputers
been in wide use, and the entire field was largely moribund by the end of the
1990s. This lack of success may be attributed largely to the fact that earlier
work was based almost entirely on ASIC technology, which was never sufficiently
developed or competitive enough to justify large-scale adoption; the gate-arrays
of the period were neither large enough nor fast enough for serious
neural-network applications. Nevertheless, the current literature shows that
ASIC neurocomputers appear to be making some sort of a comeback [1, 2, 3];
we shall argue below that these efforts are destined to fail for exactly the same
reasons that earlier ones did. On the other hand, the capacity and performance
of current FPGAs are such that they present a much more realistic alternative.
In what follows, we shall give more detailed arguments to support these claims.
The chapter is organized as follows. Section 2 is a review of the fundamentals
of neural networks; still, it is expected that most readers of the book will
already be familiar with these. Section 3 briefly contrasts ASIC-neurocomputers
with FPGA-neurocomputers, with the aim of presenting a clear case for the
latter; more significant aspects of this argument will be found in [18]. One
of the most repeated arguments for implementing neural networks in hardware
is the parallelism that the underlying models possess; Section 4 is a short
section that reviews this (the basic neuron computation behind that argument
is sketched at the end of this introduction). In Section 5 we briefly describe
the realization of a state-of-the-art FPGA device. The objective there is to
put certain subsequent discussions in a concrete context and to allow grounded
discussion of what can or cannot be achieved with current FPGAs. Section 6
deals with certain aspects of computer arithmetic that are relevant to
neural-network implementations. Much of this is straightforward, and our main
aim is to highlight certain subtle aspects. Section 7 nominally deals with
activation functions, but is actually mostly devoted to the sigmoid function.
There are two main reasons for this choice: first, the chapter contains a
significant contribution to the implementation of elementary or near-elementary
activation functions, a contribution whose nature is not limited to the sigmoid
function; second, the sigmoid function is the most important activation function
for neural networks (its standard definition is also recalled at the end of
this introduction). In Section 8, we very briefly address an important issue:
performance evaluation. Our goal here is simple and can be stated quite
succinctly: as far as performance evaluation goes, neurocomputer architecture
continues to languish in the "Dark Ages", and this needs to change. A final
section summarises the main points made in the chapter and also serves as a
brief introduction to subsequent chapters in the book.
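
As a point of reference for the parallelism argument reviewed in Section 4, the
following is a minimal sketch of the standard artificial-neuron computation;
the particular notation (inputs x_i, weights w_{ji}, activation function
\varphi) is ours and is not drawn from any one section of the chapter:

\[
y_j = \varphi\!\left( \sum_{i=1}^{n} w_{ji}\, x_i \right)
\]

The n products w_{ji} x_i are mutually independent, as are the weighted sums
computed by the different neurons of a layer; it is this independence, at both
the neuron and the layer level, that hardware implementations seek to exploit.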
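
For concreteness, the sigmoid referred to in Section 7 is, in its most common
(logistic) form,

\[
\sigma(x) = \frac{1}{1 + e^{-x}},
\]

which maps any real input into the interval (0, 1); the chapter may of course
work with scaled or shifted variants, so this should be read only as the
baseline definition.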
