Gentoo Forums :: Off the Wall

Software (Development) 2.0
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 15, 2017 4:02 am    Post subject: Software (Development) 2.0

Quote:
The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc.

[...]

In contrast, Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent.
It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.

Software 2.0 is not going to replace 1.0 (indeed, a large amount of 1.0 infrastructure is needed for training and inference to “compile” 2.0 code), but it is going to take over increasingly large portions of what Software 1.0 is responsible for today. Let’s examine some examples of the ongoing transition to make this more concrete:
Software 2.0
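To make the quoted idea concrete, here is a toy version of "searching the program space" with gradient descent. This is my own sketch in plain NumPy, not code from the article:

Code:
import numpy as np

rng = np.random.default_rng(0)

# The "constraints on the behavior of a desirable program":
# a dataset of input/output pairs. The hidden target program
# here is y = 3x - 1, which the search should recover.
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 3.0 * X - 1.0

# The "program" is nothing but its weights; searching the
# continuous program space means nudging the weights downhill
# on a loss that measures constraint violation.
w = rng.normal(size=(1, 1))
b = np.zeros(1)
lr = 0.1

for step in range(500):
    pred = X @ w + b                   # run the current program
    err = pred - y
    grad_w = 2.0 * X.T @ err / len(X)  # gradient of mean squared error
    grad_b = 2.0 * err.mean(axis=0)
    w -= lr * grad_w                   # gradient descent step (full batch here)
    b -= lr * grad_b

print(w.ravel(), b)  # lands near w=3, b=-1: the program was found, not written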

Funny. When I took Pascal a long time ago, I decided not to pursue it as a career because reading the entrails indicated a lot of the labor would go offshore.

Since I've recently tried to focus more on improving my programming abilities, I've been wondering how the current state of software development along with machine learning is likely to change skillset requirements for "tomorrow's" programmer.

:(
_________________
Slowly I turned. Step by step.
The Doctor
Moderator
Joined: 27 Jul 2010
Posts: 2474

Posted: Wed Nov 15, 2017 5:52 am

There has always been a lot of noise about what the next "big thing" is, and it is almost always completely wrong. For example, missiles were supposed to make dogfights obsolete. It turns out they didn't: they don't work unless they are fired in the proper envelope. In other words, one must dogfight to get a missile shot. Then of course guns cover a range that missiles just can't, so you still need one.

I wouldn't expect "Software 2.0" to be the revolution he thinks it is. It sounds like a niche application that is being overgeneralized because it applies to his particular problem set.
_________________
First things first, but not necessarily in that order.

Apologies if I take a while to respond. I'm currently working on the dematerialization circuit for my blue box.
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 15, 2017 6:06 am

Well, his problem set and that of anyone else with data sets large enough to train the "AI", no?

The next questions are what size data sets are required, and what it takes to generate a sufficiently large one.

After that, it will be a matter of expanding where machine learning can be utilized, improving how machine learning is achieved, and reducing how much data is required to "bootstrap," or eliminating the need for it entirely. Sure, that seems likely to be pretty far off... unless machine learning figures it out first.
_________________
Slowly I turned. Step by step.
wswartzendruber
Veteran
Joined: 23 Mar 2004
Posts: 1246
Location: Idaho, USA

Posted: Wed Nov 15, 2017 6:11 am

Am I supposed to hand this thing inputs and outputs, and have it calculate the most efficient algorithm? If so, it sounds like nothing more than a regression system on steroids.
_________________
Gun: Glock 19 Gen 4
Sights: XS DXT Big Dot
Holster: StealthGear VentCore IWB
Ammunition: Federal Premium HST 124gr
Light: Inforce APLc
richk449
Guru
Joined: 24 Oct 2003
Posts: 345

Posted: Wed Nov 15, 2017 6:12 am

The Doctor wrote:
There has always been a lot of noise about what the next "big thing" is, and it is almost always completely wrong. For example, missiles were supposed to make dogfights obsolete. It turns out they didn't: they don't work unless they are fired in the proper envelope. In other words, one must dogfight to get a missile shot. Then of course guns cover a range that missiles just can't, so you still need one.

When was the last time a dogfight occurred?
wswartzendruber
Veteran
Joined: 23 Mar 2004
Posts: 1246
Location: Idaho, USA

Posted: Wed Nov 15, 2017 6:12 am

richk449 wrote:
The Doctor wrote:
There has always been a lot of noise about what the next "big thing" is, and it is almost always completely wrong. For example, missiles were supposed to make dogfights obsolete. It turns out they didn't: they don't work unless they are fired in the proper envelope. In other words, one must dogfight to get a missile shot. Then of course guns cover a range that missiles just can't, so you still need one.

When was the last time a dogfight occurred?

When was the last time the U.S. fought a competent air force?
_________________
Gun: Glock 19 Gen 4
Sights: XS DXT Big Dot
Holster: StealthGear VentCore IWB
Ammunition: Federal Premium HST 124gr
Light: Inforce APLc
richk449
Guru
Joined: 24 Oct 2003
Posts: 345

Posted: Wed Nov 15, 2017 6:15 am

wswartzendruber wrote:
Am I supposed to hand this thing inputs and outputs, and have it calculate the most efficient algorithm? If so, it sounds like nothing more than a regression system on steroids.

I think one can view a neural network as a generalization of a linear regression model.
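To make that concrete (a minimal sketch of my own, not anyone's production code): drop the hidden layer and a network is literally linear regression; the nonlinear hidden layer is what generalizes it.

Code:
import numpy as np

def linear_regression(X, w, b):
    # Ordinary linear regression: a single matrix product.
    return X @ w + b

def tiny_network(X, W1, b1, W2, b2):
    # The "on steroids" part: a second linear map with a nonlinearity
    # in between. Remove the np.maximum and the two linear maps
    # compose into one, collapsing back to plain linear regression.
    h = np.maximum(0.0, X @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2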
The Doctor
Moderator
Joined: 27 Jul 2010
Posts: 2474

Posted: Wed Nov 15, 2017 6:24 am

richk449 wrote:
The Doctor wrote:
There has always been a lot of noise about what the next "big thing" is, and it is almost always completely wrong. For example, missiles were supposed to make dogfights obsolete. It turns out they didn't: they don't work unless they are fired in the proper envelope. In other words, one must dogfight to get a missile shot. Then of course guns cover a range that missiles just can't, so you still need one.

When was the last time a dogfight occurred?
The last time the Air Force was called into an air-superiority battle. Desert Storm saw F-15s tangling with MiG-29s in old-fashioned dogfights. The Iraqis refused to engage after Desert Storm, so 2003 was a turkey shoot.

Vietnam was supposed to be the end of dogfighting, but the result was pilots begging for guns and for ACM schools to be reopened (ACM = air combat maneuvering, i.e. dogfighting). It also saw many pilots killed because they were not given the tools they needed.

Anyway, back on topic: I'd love to see how one constructs a data set to program a kernel, a window manager, or Minecraft. Until they can do that, it's not really a general tool.
_________________
First things first, but not necessarily in that order.

Apologies if I take a while to respond. I'm currently working on the dematerialization circuit for my blue box.
wswartzendruber
Veteran
Joined: 23 Mar 2004
Posts: 1246
Location: Idaho, USA

Posted: Wed Nov 15, 2017 6:26 am

I'm thinking this is more for DSP algorithms.
_________________
Gun: Glock 19 Gen 4
Sights: XS DXT Big Dot
Holster: StealthGear VentCore IWB
Ammunition: Federal Premium HST 124gr
Light: Inforce APLc
R0b0t1
Apprentice
Joined: 05 Jun 2008
Posts: 255

Posted: Wed Nov 15, 2017 6:44 am

There will likely always be a place for designing deterministic behavior that is a poor fit for neural networks, at least until neural networks achieve sentience. But then what those algorithms will be doing is designing deterministic algorithms that would be a poor fit for neural networks.
Dr.Willy
Guru
Joined: 15 Jul 2007
Posts: 458
Location: NRW, Germany

Posted: Wed Nov 15, 2017 11:43 am

Just what the world has been waiting for: Software that you cannot debug.
flysideways
Apprentice
Joined: 29 Jan 2005
Posts: 151

Posted: Wed Nov 15, 2017 12:54 pm

richk449 wrote:
The Doctor wrote:
There has always been a lot of noise about what the next "big thing" is, and it is almost always completely wrong. For example, missiles were supposed to make dogfights obsolete. It turns out they didn't: they don't work unless they are fired in the proper envelope. In other words, one must dogfight to get a missile shot. Then of course guns cover a range that missiles just can't, so you still need one.

When was the last time a dogfight occurred?


US? June 18, 2017

Turkey? November 2015

Unknown? July 17, 2014
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 15, 2017 5:22 pm

The Doctor wrote:
I'd love to see how one constructs a data set to program a kernel, a window manager, or Minecraft. Until they can do that, it's not really a general tool.
I don't think anyone has claimed it to be a general tool (yet). The article mentioned several domains where it was used. My guess is that list will grow.
_________________
Slowly I turned. Step by step.
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 15, 2017 5:24 pm

Dr.Willy wrote:
Just what the world has been waiting for: Software that you cannot debug.
Someday there may be a ConvNet for that. Mmmm, buzzwordy.
_________________
Slowly I turned. Step by step.
erm67
Apprentice
Joined: 01 Nov 2005
Posts: 230
Location: Where the black men cannot enter

Posted: Wed Nov 15, 2017 7:20 pm    Post subject: Re: Software (Development) 2.0

pjp wrote:


Funny. When I took Pascal a long time ago, I decided not to pursue it as a career because reading the entrails indicated a lot of the labor would go offshore.

Since I've recently tried to focus more on improving my programming abilities, I've been wondering how the current state of software development along with machine learning is likely to change skillset requirements for "tomorrow's" programmer.


I imagine it must have been hard to learn that everything will now go into the cloud, sysadmin labor will disappear, and devops must actually program... I hope you're not into horse-race betting...
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
A posse ad esse non valet consequentia
Πάντα ῥεῖ
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 15, 2017 11:02 pm

The cloud remains cost-prohibitive for many uses.
_________________
Slowly I turned. Step by step.
erm67
Apprentice
Joined: 01 Nov 2005
Posts: 230
Location: Where the black men cannot enter

Posted: Thu Nov 16, 2017 8:13 am

Systems programming labor never went abroad...

Some highly paid Programmer Analysts cannot program in any language and need a low-paid programming workforce abroad, since that's an alienating job; you can see the consequences here :-)
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
A posse ad esse non valet consequentia
Πάντα ῥεῖ
Akkara
Administrator
Joined: 28 Mar 2006
Posts: 6520
Location: &akkara

Posted: Thu Nov 16, 2017 9:54 am

Read up on the "ImageNet" image recognition competition and the papers of the winning entries. That will give a good idea of what's possible, and of what's involved.

For quick background: there's a training dataset of roughly 1.2 million labeled images, covering 1,000 categories of labels. The contest consists of coming up with the best model for recognizing a previously withheld set of images. It was a hard problem, with recognition rates hovering around the 50% mark until 2012, when Alex Krizhevsky and friends blew the competition away with AlexNet, achieving a top-5 error rate of about 15% when the next runner-up was at about 26%.

This marked the first time (as far as I know) that a "deep" network with every layer trained from the data was used. Previously, hand-tuned feature extraction was common, with training/learning used only in the later stages. The big ideas were that (1) training the whole thing from scratch works better than attempts at hand-tuning feature extractors; (2) GPUs accelerate the process enough to make it feasible; and (3) new training techniques can make convergence of "deep" networks practical.

In the years that followed, "everyone" jumped on that bandwagon and error rates plummeted. In 2014 Google came up with their "Inception" network, achieving something like a 6% error rate. Since then the error rate has dropped to 3% or so, and the recognition problem is essentially considered "solved". The focus of the competition has changed, now requiring not only recognition but also bounding boxes around each object, showing where it is in the scene.
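These architectures are easier to write down than the hype suggests. A minimal, untrained sketch of the AlexNet-style recipe in PyTorch (the framework choice and layer sizes are mine, purely for illustration):

Code:
import torch
import torch.nn as nn

# Convolution layers learn the feature extractors that used to be
# hand-tuned; the final linear layer maps features to class scores.
net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 1000),  # one score per ImageNet category
)

x = torch.randn(1, 3, 224, 224)  # one fake RGB image
print(net(x).shape)              # torch.Size([1, 1000])

Real entries are far deeper and need the long GPU training runs mentioned below, but structurally it is the same idea.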

It can take weeks to train one of these things well, using a quad-graphics-card system.

Geoffrey Hinton of U. Toronto has some excellent videos on YouTube if you want to know the nuts and bolts of it all. He starts from the basics and builds up from there. Highly recommended. But remember to bring your calculus.

As far as what is possible? Lots. And a lot is already in common use:

Google's translation? Runs on neural networks. (Read up on "recurrent" nets)

Google's image search? Essentially the imagenet task, but trained on Google's massive data store using its massive compute clusters.

Facebook's just-about-everything uses neural nets.

Self-driving cars are another big application area for these nets (and many bugs are still being worked out).

Most of the automated censor-bots looking for NSFW, infringing, etc. content are neural-net based. And, yes, they don't always work particularly well, as the guy whose recordings of birdsong got pulled over copyright claims by a major record label can tell you.

The recent news stories of chatbots expressing lots of crude and racist attitudes have a different and curious explanation, according to what I heard: there's apparently an open legal question as to whether a bot trained using data found on the net (other people's text and images) can be thought of as a "derived work" of that data. To avoid such questions, researchers try to use training data that is in the public domain. And, apparently, one large trove of non-copyrighted text is various newspapers and public records from the Jim Crow era. Hence the result.

As far as "Software 2.0": Somebody just got a very nice shiny new hammer. And suddenly a lot of things are starting to look like nails. There will be new applications no doubt. I don't know where it will all lead. I doubt anyone does, at this stage. I expect many kinds of new applications to be discovered. At the same time, I also expect its limitations to slowly become apparent. It will eventually all shake out, and there'll be a bunch of things this is "good at", and a bunch of things that other techniques are more appropriate. It is, in its current form, and as a poster previously had said, essentially a very powerful statistical technique. And like any statistical technique it only works as well as the quality of data you feed it. Because as they say, garbage in, garbage out.

I expect "neural-net architect" to become a very hot-in-demand job function / title.
_________________
Humility means not having to eat as much crow when you are shown to be wrong.
erm67
Apprentice
Joined: 01 Nov 2005
Posts: 230
Location: Where the black men cannot enter

Posted: Thu Nov 16, 2017 10:58 am

Akkara wrote:


I expect "neural-net architect" to become a very hot-in-demand job function / title.


While neural-net trainer/programmer will be a very hot position abroad...
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
A posse ad esse non valet consequentia
Πάντα ῥεῖ
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Thu Nov 16, 2017 1:23 pm

Akkara wrote:
It can take weeks to train one of these things well, using a quad-graphics-card system.
That really doesn't seem like much given the benefits to the operator.

Akkara wrote:
The recent news stories of chatbots expressing lots of crude and racist attitudes have a different and curious explanation, according to what I heard: there's apparently an open legal question as to whether a bot trained using data found on the net (other people's text and images) can be thought of as a "derived work" of that data. To avoid such questions, researchers try to use training data that is in the public domain. And, apparently, one large trove of non-copyrighted text is various newspapers and public records from the Jim Crow era. Hence the result.
Interesting. Hadn't heard that and seems plausible.

Akkara wrote:
As far as "Software 2.0": Somebody just got a very nice shiny new hammer. And suddenly a lot of things are starting to look like nails. There will be new applications no doubt. I don't know where it will all lead. I doubt anyone does, at this stage. I expect many kinds of new applications to be discovered. At the same time, I also expect its limitations to slowly become apparent. It will eventually all shake out, and there'll be a bunch of things this is "good at", and a bunch of things that other techniques are more appropriate. It is, in its current form, and as a poster previously had said, essentially a very powerful statistical technique. And like any statistical technique it only works as well as the quality of data you feed it. Because as they say, garbage in, garbage out.

I expect "neural-net architect" to become a very hot-in-demand job function / title.
Would have been nice to be on the inside to get such a title.

It seems like a lot of the problems being solved now are the low-hanging fruit. That is, someone found a good solution, and it is now easier to find problems of a similar kind to solve with that solution. I agree that limitations will become apparent, but it also seems likely that the general idea could be used elsewhere, even if it is initially cumbersome.

Someone(s) at one or more of the internet-scale companies could gather systems data and find a way to use it in similar fashion, even if the results are used by people rather than applied automatically. That adds a barrier to entry, where only those associated with such companies can get the hard and soft (pedigree) skills necessary for those jobs. An example (which I haven't yet read beyond a summary including this statement): Babatunde Olorisade, a Ph.D. student at Keele University who authored the study analyzing the 30 AI research papers, says proprietary data and information used by large technology companies in their research, but withheld from papers, is holding the field back.

Interesting times.
_________________
Slowly I turned. Step by step.
richk449
Guru
Joined: 24 Oct 2003
Posts: 345

Posted: Thu Nov 16, 2017 3:03 pm

Akkara wrote:
...

Nice explanation. My only quibble is:

Quote:
I expect "neural-net architect" to become a very hot-in-demand job function / title.

Forget "to become" - it is already hugely in demand:
AngelList search
pjp
Administrator
Joined: 16 Apr 2002
Posts: 17409

Posted: Wed Nov 22, 2017 9:45 pm

Google's DeepMind shows off self-taught AlphaGo Zero AI
Quote:
Google's DeepMind subsidiary has announced the launch of a new Go-playing artificial intelligence capable of trouncing the original AlphaGo, and without ever having had human intervention in its training process.
Quote:
Where AlphaGo was trained on data culled from thousands of games of Go played by humans, AlphaGo Zero has taught itself the game entirely from first principles simply by being handed the rule set and being told to play against itself. Three days after the project launched, DeepMind has claimed, AlphaGo Zero had already surpassed the version of AlphaGo which beat Lee Se-dol in 2016; by day 21 it had reached the level of the carefully-trained AlphaGo Master, which beat 60 top-class human players; by day 40 it had become, by Elo rating, the greatest Go player in human or machine history.


Some hardware details and links at source.
_________________
Slowly I turned. Step by step.
erm67
Apprentice
Joined: 01 Nov 2005
Posts: 230
Location: Where the black men cannot enter

Posted: Thu Nov 23, 2017 10:14 pm

I don't know why, but I am always suspicious of articles like that. Now they say that by playing against itself it has become, by Elo rating, the greatest Go player in history:


Quote:
AlphaGo Zero, though, is something very different. Where AlphaGo was trained on data culled from thousands of games of Go played by humans, AlphaGo Zero has taught itself the game entirely from first principles simply by being handed the rule set and being told to play against itself. Three days after the project launched, DeepMind has claimed, AlphaGo Zero had already surpassed the version of AlphaGo which beat Lee Se-dol in 2016; by day 21 it had reached the level of the carefully-trained AlphaGo Master, which beat 60 top-class human players; by day 40 it had become, by Elo rating, the greatest Go player in human or machine history.


This is how Elo ratings are calculated:

Quote:
A player's Elo rating is represented by a number which increases or decreases depending on the outcome of games between rated players. After every game, the winning player takes points from the losing one. The difference between the ratings of the winner and loser determines the total number of points gained or lost after a game. In a series of games between a high-rated player and a low-rated player, the high-rated player is expected to score more wins. If the high-rated player wins, then only a few rating points will be taken from the low-rated player. However, if the lower rated player scores an upset win, many rating points will be transferred. The lower rated player will also gain a few points from the higher rated player in the event of a draw. This means that this rating system is self-correcting. A player whose rating is too low should, in the long run, do better than the rating system predicts, and thus gain rating points until the rating reflects their true playing strength.


The difference between the ratings of the winner and loser determines the total number of points gained or lost after a game. By definition, if it plays against itself, its Elo rating doesn't change... fake news, maybe?
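For reference, the textbook Elo update in code (my sketch of the standard formula, not DeepMind's actual rating methodology):

Code:
def elo_update(r_winner, r_loser, k=32):
    # Expected score follows from the rating gap; the winner then
    # takes k * (1 - expected) points from the loser.
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

# Two copies at the same rating just shuffle the same points back
# and forth; a new rating has to come from games against opponents
# that already hold ratings.
print(elo_update(2000.0, 2000.0))  # (2016.0, 1984.0)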
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
A posse ad esse non valet consequentia
Πάντα ῥεῖ
The Doctor
Moderator
Joined: 27 Jul 2010
Posts: 2474

Posted: Thu Nov 23, 2017 11:21 pm

Neural nets sound really impressive when they are presented that way, but fundamentally they are quite simple. The complex language is something of a smoke screen. I remember what one of my CS professors said: "Invent your own lingo and your pay goes up." He wasn't kidding.

All the network does is take an array of inputs and do a little bit of matrix math. The training is done via a calculus minimization problem. Those can get involved, but conceptually they are very simple. Just take-a-derivative type of stuff.

Training is done by taking inputs and outputs and defining an error function based on that training data, which is then minimized. Actually really easy.
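In code, the whole forward pass and error function fit in a few lines. A toy sketch, with shapes picked arbitrarily:

Code:
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))       # an array of inputs
target = rng.normal(size=(8, 2))  # the outputs we want

W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def forward(x):
    # "Take an array of inputs and do a little bit of matrix math."
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def error():
    # The error function defined on the training data; training is
    # nothing more than minimizing this by following derivatives.
    return ((forward(x) - target) ** 2).mean()

print(error())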

The self-learning is a little more involved, but not much. A self-learning system adds an algorithm which knows what a good result is, such as winning a game, and what a bad result is. This means that the program can now produce its own training data.

A really cool result from extremely simple code. Don't get me wrong, it is really useful and fascinating, but not magical.
_________________
First things first, but not necessarily in that order.

Apologies if I take a while to respond. I'm currently working on the dematerialization circuit for my blue box.
wswartzendruber
Veteran
Joined: 23 Mar 2004
Posts: 1246
Location: Idaho, USA

Posted: Fri Nov 24, 2017 3:51 am

The Doctor wrote:
...

Needs more buzzwords.
_________________
Gun: Glock 19 Gen 4
Sights: XS DXT Big Dot
Holster: StealthGear VentCore IWB
Ammunition: Federal Premium HST 124gr
Light: Inforce APLc