story I'm thinking of, about AI / ML circuit optimisation

P924

Original Poster:

1,272 posts

189 months

Wednesday 11th July 2018
Hi,

I'm trying to find something I read a while back. Basically...

A couple of engineers develop a robot (?) to optimise a circuit design, a physical thing.
Said robot keeps removing components, and the circuit continues to work, reducing from something like 30 components to 10.
The engineers are unable to recreate the circuit themselves, yet the robot's version continues to work.
It seems that at some point one component is damaged/changed, allowing the robot to use far fewer components.


Does anyone have any idea what I'm thinking of?

Thanks,

P924

Beati Dogu

9,193 posts

146 months

Thursday 12th July 2018

FarmyardPants

4,173 posts

225 months

Monday 16th July 2018
Very interesting article, thanks for posting it. This is related to a post I made in an AI thread on here, the gist of which was that, by an iterative process of randomly modifying algorithms and selecting the best-performing ones as the basis for the next iteration, it should be possible to 'grow' an algorithm that achieves a particular goal without any knowledge of how that algorithm works. This is a more general, albeit more nebulous, concept than the more conventional one of training neural nets, where the parameters (the weights of each node) are much more tightly constrained. There have been cases of this reported (chatbots etc.), but there is still much to be explored in this area IMO.
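
To make the "randomly modify and select" idea concrete, here is a minimal sketch in Python (not from the thread; the bit-string target and the parameters are made up purely for illustration):

import random

# Toy target: the "goal" the evolved candidates are scored against.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(candidate):
    # Higher is better: number of positions matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=20, generations=200):
    # Start from random candidates, keep the best quarter each generation,
    # and refill the population with mutated copies of the survivors.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return population[0], gen
        parents = population[:pop_size // 4]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return population[0], generations

best, gens = evolve()
print(f"best candidate after {gens} generations: {best}")

The same loop works for richer representations of an "algorithm"; only the encoding, the mutate step and the fitness measure need to change.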

To go off topic re the AI thing:

I believe there is a limit to the complexity of algorithm that we can write/understand - we cannot program a "brain", because to do so implies that we must first have the mental capacity to understand how a brain works, which I propose is not possible. My thoughts and reasoning are a product of my mind, so it's not unreasonable to suggest that the complexity of what my mind is capable of producing is <= the complexity of my mind itself. To suggest otherwise (i.e. a ">" relationship) implies, to me at least, a greater capacity for understanding (more synapses, or whatever) than the thing being comprehended, which is a contradiction.

But this limit does not apply if we write an algorithm that can evolve/write another. In other words, we can design a process of algorithm evolution which results in something that we are not (by my "law") able to understand. And that's how Skynet started smile
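
As a rough illustration of the "algorithm that evolves/writes another" idea above, the sketch below (again not from the thread; the primitive operations and the hidden target x*x + 1 are invented for the example) evolves a short program whose internal structure is picked by selection rather than written by a person:

import random

# Primitive steps a candidate "program" can be built from (invented for the example).
PRIMITIVES = [
    ("add1",   lambda v, x: v + 1),
    ("double", lambda v, x: v * 2),
    ("add_x",  lambda v, x: v + x),
    ("mul_x",  lambda v, x: v * x),
    ("keep",   lambda v, x: v),
]

def run(program, x):
    # Apply each step in turn to an accumulator, starting from 0.
    value = 0
    for _, step in program:
        value = step(value, x)
    return value

def error(program):
    # Distance from a hidden target behaviour, here x*x + 1, over some sample inputs.
    return sum(abs(run(program, x) - (x * x + 1)) for x in range(-5, 6))

def mutate(program):
    # Replace one randomly chosen step with another primitive.
    clone = list(program)
    clone[random.randrange(len(clone))] = random.choice(PRIMITIVES)
    return clone

population = [[random.choice(PRIMITIVES) for _ in range(5)] for _ in range(50)]
for generation in range(500):
    population.sort(key=error)
    if error(population[0]) == 0:
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("evolved steps:", [name for name, _ in population[0]])
print("remaining error:", error(population[0]))

Nothing in the loop "understands" the program it produces; it simply keeps whatever scores well, which is the sense in which the result can end up opaque to the people who set the process going.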