Archive for the ‘Emerging Topics’ Category

Matrix here

March 20th, 2015

IEEE Spectrum describes how neural circuitry implanted in the brain can do life-saving work. The article doesn't give much detail about the kind of neural circuitry implanted, but it does mention a microprocessor and a battery, so it looks like a very low-power microprocessor running some kind of algorithm similar to a neural net. The article suggests that learning takes place offline, but it is silent on how the parameter update is done or how easy and unintrusive it is.

Either way, it shows another very compelling application for embedded systems capable of machine-learning tasks.

A bionic woman, perhaps, but the striking similarity to The Matrix is undeniable. (via IEEE)


HPC and the Excluded Middle | blog@CACM | Communications of the ACM

November 23rd, 2010

HPC and the Excluded Middle

By Daniel Reed October 24, 2010

I have repeatedly been told by both business leaders and academic researchers that they want “turnkey” HPC solutions that have the simplicity of desktop tools but the power of massively parallel computing. Such desktop tools would allow non-experts to create complex models quickly and easily, evaluate those models in parallel, and correlate the results with experimental and observational data. Unlike ultra-high-performance computing, this is about maximizing human productivity rather than obtaining the largest fraction of possible HPC platform performance. Most often, users will trade hardware performance for simplicity and convenience. This is an opportunity and a challenge, an opportunity to create domain-specific tools with high expressivity and a challenge to translate the output of those tools into efficient, parallel computations.

via HPC and the Excluded Middle | blog@CACM | Communications of the ACM.
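As a hypothetical sketch of that kind of "turnkey" parallelism (the model function and parameter grid below are invented for illustration): with Python's built-in multiprocessing pool, a non-expert writes an ordinary function and a plain map call, and the evaluation fans out across all cores with no explicit parallel code.

```python
# Hypothetical sketch: a "turnkey" parallel parameter sweep.
# The model function and parameter grid are invented for illustration;
# the point is that the user writes no explicit parallel code.
from multiprocessing import Pool

def evaluate_model(params):
    """Evaluate one model instance; stands in for a domain-specific model."""
    growth_rate, capacity = params
    population = 1.0
    for _ in range(1000):  # toy logistic growth model
        population += growth_rate * population * (1.0 - population / capacity)
    return params, population

if __name__ == "__main__":
    grid = [(r / 100.0, k) for r in range(1, 50) for k in (10.0, 100.0, 1000.0)]
    with Pool() as pool:                          # uses all available cores
        results = pool.map(evaluate_model, grid)  # parallel "map" over the grid
    best = max(results, key=lambda item: item[1])
    print("best parameters:", best[0])
```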


HTM (Hierarchical Temporal Memory)

September 2nd, 2010

HTM (Hierarchical Temporal Memory): not programmed task by task, and not a different algorithm for each problem. HTMs provide four capabilities:

1) Discover causes

– Finds relationships in its inputs.

– Its internal representation of a possible cause is called a "belief".

2) Infer causes of novel input

– Inference is similar to pattern recognition.

– Ambiguous input yields a flat belief (probability spread nearly evenly over the causes); see the sketch after this list.

– HTMs handle novel input during both inference and training.

3) Make predictions

– Each node stores sequences of patterns; combined with the current input, these predict what will happen next.

4) Direct behavior: interact with the world.
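A minimal sketch of the "ambiguous -> flat" behavior (the prototype patterns and the distance-based scoring below are invented for illustration, not HTM's actual algorithm): a node's belief is a distribution over its learned causes, and an ambiguous input spreads the probability mass nearly evenly.

```python
# Minimal sketch (invented data): belief = distribution over known causes.
# An ambiguous input matches several causes equally, so the belief is flat.
import numpy as np

causes = {                      # prototype pattern for each learned cause
    "dog": np.array([1.0, 0.0, 1.0, 0.0]),
    "cat": np.array([1.0, 0.0, 0.9, 0.1]),
    "car": np.array([0.0, 1.0, 0.0, 1.0]),
}

def belief(x):
    """Softmax over negative distances: closer prototypes get more mass."""
    names = list(causes)
    scores = np.array([-np.linalg.norm(x - causes[n]) for n in names])
    p = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(names, p))

print(belief(np.array([1.0, 0.0, 1.0, 0.0])))  # clear input -> peaked belief
print(belief(np.array([0.5, 0.5, 0.5, 0.5])))  # ambiguous input -> flat belief
```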

How do HTMs discover and infer causes?

Why is a hierarchy important?

1) Shared representations lead to generalization and storage efficiency.

2) The hierarchy of an HTM matches the spatial and temporal hierarchy of the real world.

3) Belief propagation ensures all nodes quickly reach the best mutually compatible beliefs.

– Belief propagation calculates the marginal distribution for each unobserved node, conditional on any observed nodes (see the sketch after this list).

4) Hierarchical representation affords a mechanism for attention.
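On a model with just one unobserved cause and one observed input, that marginal computation reduces to a single Bayes update. A toy numeric sketch (all probabilities invented for illustration):

```python
# Toy sketch (invented numbers): marginal of an unobserved cause C,
# conditional on an observed evidence variable E: multiply and normalize.
import numpy as np

p_c = np.array([0.5, 0.5])            # prior over cause C (two states)
p_e_given_c = np.array([[0.9, 0.1],   # P(E | C=0)
                        [0.2, 0.8]])  # P(E | C=1)

def marginal_c(e_observed):
    """P(C | E=e): prior times likelihood, then normalize."""
    unnorm = p_c * p_e_given_c[:, e_observed]
    return unnorm / unnorm.sum()

print(marginal_c(0))  # evidence state 0 -> belief shifts toward C=0
```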

How does each node discover and infer causes?

– Assigning causes: the most common sequences of patterns are assigned as causes.

– Assigned causes are then used for prediction, behavior, etc.

Why is time necessary to learn?

• Pooling (many-to-one) methods

– Overlap: many patterns are superimposed into one representation (figure: several images of watermelons overlapped in one picture).

– Learning of sequences: the method HTM uses; patterns that follow each other in time are grouped under one cause (figure: four pictures stored sequentially). A toy sketch of this pooling follows.
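A toy sketch of sequence-based pooling (the data stream and frequency threshold are invented for illustration, not Numenta's algorithm): patterns that frequently follow one another in time get grouped under one shared cause.

```python
# Toy sketch (invented data): temporal pooling groups patterns that
# frequently follow each other in time under one shared cause.
from collections import Counter

# A->B->C repeats (one object seen over time); X->Y repeats (another).
stream = ["A", "B", "C", "A", "B", "C", "X", "Y", "X", "Y", "A", "B", "C"]

counts = Counter(zip(stream, stream[1:]))            # transition frequencies
frequent = [pair for pair, n in counts.items() if n >= 2]

groups = []                                          # merge frequent transitions
for a, b in frequent:
    for g in groups:
        if a in g or b in g:
            g.update({a, b})
            break
    else:
        groups.append({a, b})

print(groups)  # [{'A', 'B', 'C'}, {'X', 'Y'}] -- two pooled causes
```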

– Reference

Hierarchical Temporal Memory – Concepts, Theory, and Terminology by Jeff Hawkins and Dileep George, Numenta Inc.

Emulation Engine for Spiking Neurons and Adaptive Synaptic Weights

August 31st, 2010

PCNN (Pulse-Coded Neural Network): a network model for the evaluation of biology-oriented image processing, usually simulated on general-purpose computers, e.g. PCs or workstations.

SNN (Spiking Neural Network): a neural network model that, in addition to neuronal and synaptic state, incorporates the concept of time into its operating model.

SEE (Spiking Neural Network Emulation Engine): a field-programmable gate array (FPGA) based emulation engine for spiking neurons and adaptive synaptic weights. It tackles the memory bottleneck by providing a distributed memory architecture and high bandwidth to the weight memory.

PCNN – operated on PCs and workstations -> time-consuming, because of a bottleneck: sequential access to the weight memory.

FPGA SEE – distributed memory, high-bandwidth weight memory, and separation of the calculation of neuron states from the network topology.

SNNs or PCNNs:
1. Reproduce spikes or pulses.
2. Perform tasks such as vision.

Problems of large PCNNs:
1. Calculation steps.
2. Communication resources.
3. Load balancing.
4. Storage capacity.
5. Memory bandwidth.

Spiking neuron model with adaptive synapses.

Non-leaky integrate-and-fire neuron (IFN):

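The post does not reproduce the model equations, so as a hedged reconstruction, the textbook non-leaky integrate-and-fire dynamics look like this (the paper's exact formulation, and its synaptic adaptation rule, may differ):

```latex
% Standard non-leaky integrate-and-fire neuron (reconstruction; the
% paper's exact formulation may differ).
% The membrane potential integrates weighted input spikes, with no leak:
\[
  \frac{dV_i(t)}{dt} = \frac{1}{C}\sum_j w_{ij}\, s_j(t),
\]
% where s_j(t) is the spike train of presynaptic neuron j and w_{ij} is
% the (adaptive) synaptic weight. The neuron fires on reaching the
% threshold, then resets:
\[
  V_i(t) \ge V_{\mathrm{th}} \;\Rightarrow\; \text{spike, and } V_i \leftarrow V_{\mathrm{reset}}.
\]
```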

Overview of the SEE architecture

A. Simulation control (PPC2)

1. Configuration of the network.

2. Monitoring of network parameters.

3. Administration of the event-lists.

– Two event-lists: the DEL (Dynamic Event-List) includes all excited neurons that receive a spike or an external input current; the FEL (Fire Event-List) stores all firing neurons that are in a spike-sending state, together with the time values at which each neuron enters the spike-receiving state again. A toy sketch of this two-list loop follows.
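A hedged sketch of how such a two-list loop might work (toy dynamics and invented constants, not the paper's implementation):

```python
# Hedged sketch (not the paper's implementation): event-driven update
# loop using two lists, DEL (excited neurons awaiting integration) and
# FEL (firing neurons with their spike-end times).
import heapq

V_TH, T_SPIKE = 1.0, 2          # invented threshold and spike duration

def step(t, potentials, inputs, del_list, fel_heap):
    """One time step of a DEL/FEL-style loop (toy dynamics)."""
    # FEL: release neurons whose spike-sending phase ends at time t.
    while fel_heap and fel_heap[0][0] <= t:
        _, n = heapq.heappop(fel_heap)
        potentials[n] = 0.0                  # back to spike-receiving state
    # DEL: integrate every excited neuron; move firing ones onto the FEL.
    for n in list(del_list):
        potentials[n] += inputs.get(n, 0.0)
        if potentials[n] >= V_TH:
            del_list.discard(n)
            heapq.heappush(fel_heap, (t + T_SPIKE, n))  # fires until t+T_SPIKE

potentials, del_list, fel = {0: 0.6, 1: 0.2}, {0, 1}, []
step(0, potentials, {0: 0.5}, del_list, fel)
print(potentials, del_list, fel)   # neuron 0 crossed threshold -> on FEL
```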

B. Network Topology Computation (NTC)

– Topology-vector phase: the presynaptic activity is determined for each excited neuron.

– Topology-update phase: the tag fields are updated according to spike start-events and spike stop-events that have occurred.

C. Neuron State Computation (NSC)

– Neuron-spike phase: determines whether an excited neuron will start to fire before the next spike stop-event.

– Neuron-update phase: integrates the neuron states with the Bulirsch-Stoer method (a sketch follows), built from:

– Modified-midpoint integration (MMID)

– Polynomial extrapolation (PZEXTR)
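For a rough sense of what MMID does (generic textbook form, not the paper's FPGA implementation): the modified-midpoint method advances an ODE across a span H using n substeps, and the Bulirsch-Stoer method then extrapolates the results for increasing n toward the exact solution.

```python
# Generic modified-midpoint step (textbook form, not the paper's code):
# advance dy/dt = f(t, y) from t to t+H using n substeps.
def mmid(f, t, y, H, n):
    h = H / n
    z0, z1 = y, y + h * f(t, y)                        # first (Euler) substep
    for m in range(1, n):
        z0, z1 = z1, z0 + 2.0 * h * f(t + m * h, z1)   # midpoint substeps
    return 0.5 * (z0 + z1 + h * f(t + H, z1))          # final smoothing step

f = lambda t, y: y                      # dy/dt = y, y(0) = 1; exact: e = 2.71828...
for n in (2, 4, 8):                     # Bulirsch-Stoer (e.g. PZEXTR) would
    print(n, mmid(f, 0.0, 1.0, 1.0, n)) # extrapolate these toward n -> infinity
```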

(Figure: PCB of the spiking neural network emulation engine.)

Performance analysis

(NNEURON = number of neurons; NBSSTEP = number of Bulirsch-Stoer integration steps; TSW = runtime of the software simulation; TSEE = runtime on the SEE; speed-up = TSW / TSEE)

n | NNEURON | NBSSTEP | TSW      | TSEE   | Speed-up
4 | 32x32   |  98717  |   1405 s |   45 s | 31.2
4 | 48x48   | 222365  |   6527 s |  226 s | 28.9
4 | 64x64   | 420299  |  22620 s |  758 s | 29.8
4 | 80x80   | 721463  |  65277 s | 2032 s | 32.1
4 | 96x96   | 926458  | 119109 s | 3757 s | 31.7
8 | 32x32   | 107276  |   1990 s |   63 s | 31.6
8 | 48x48   | 235863  |   7263 s |  312 s | 23.3
8 | 64x64   | 413861  |  31548 s |  972 s | 32.5
8 | 80x80   | 645694  |  80378 s | 2370 s | 33.9
8 | 96x96   | 967572  | 142834 s | 5113 s | 29.9

– Reference

Emulation Engine for Spiking Neurons and Adaptive Synaptic Weights by H. H. Hellmich, M. Geike, P. Griep, P. Mahr, M. Rafanelli and H. Klar.
