
Several changes for smoother results and more control over the NN. #116

Open · wants to merge 2 commits into master

Conversation

iciclesoft

Changed mutation to also check, based on the mutation rate, whether it should mutate at all. Changed the mutation rate to account for this. Doubled vehicle.maxforce so a vehicle has more control over itself. Changed the border inputs to be computed over the entire canvas instead of the 50px-wide stroke. Lastly, I've added a fallback for when all vehicles are about to die, which will probably never happen at the current reproduction rate.

The mutation changes are mainly so that not every reproduced vehicle is mutated (which was pretty much the case before). vehicle.maxforce seems to have quite an impact on the results over time; setting it to an extreme value will get some vehicles to a 100k+ framecount score quite easily. The border changes were made so the vehicles don't get stuck in a corner. The fallback is fairly useless right now, but it would be nice to have if you want to change the reproduction rate.
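
A minimal sketch of what the described mutation gating might look like (hypothetical names and rates, not the PR's exact code), assuming a genome stored as a flat array of weights:

// First decide, based on the rate, whether this offspring mutates at all,
// and only then perturb individual weights.
function mutate(weights, mutationRate) {
  if (Math.random() >= mutationRate) {
    return weights.slice(); // most offspring stay exact copies of the parent
  }
  return weights.map((w) =>
    Math.random() < mutationRate ? w + (Math.random() * 2 - 1) * 0.1 : w
  );
}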

@shiffman (Member) left a comment


These look great and seem to simplify the example! I'm curious if @meiamsome has any thoughts before merging.

inputs[0] = this.position.x / width;   // normalized distance from the west border
inputs[1] = this.position.y / height;  // normalized distance from the north border
// These inputs are the vehicle's distance to the east and south borders
inputs[2] = 1 - inputs[0];
inputs[3] = 1 - inputs[1];
meiamsome (Member)

Isn't this just equivalent to the AI learning negative weights/biases from inputs[0] and inputs[1], though?

@iciclesoft (Author)

Because all input nodes are linked directly to all hidden nodes, I would say no in this case. If the hidden layer were a multi-dimensional array, I would say yes. The main reason for this change was that it felt a bit too 'cheaty' to me, and the vehicles also showed some strange behavior because the input would always be 0 until they got within the border stroke. To be clear, though, I am no expert in neuroevolution; in fact, I learned most of what I know about this topic from watching The Coding Train :)

@iciclesoft (Author) Apr 21, 2018

Actually, I would say no in both cases. I do agree, however, that duplicating inputs 0 and 1 directly into 2 and 3 would have the exact same effect; what the inversion does do is help solve the XOR-problem for the border.

meiamsome (Member)

But duplicating them exactly would have no net effect. Simplified, an NN with two equivalent inputs and one middle-layer node would have two weights, w_11 and w_21 for example, and two biases, b_1 and b_2, right? If so, then couldn't you build a 1-input NN with the same behaviour by just summing the weights and biases together?

I also have no idea about these things either, which is why I asked if it was actually different!
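
For what it's worth, the summing argument is easy to check numerically. A toy sketch (illustrative only, assuming a single bias per hidden node and a sigmoid activation):

// A hidden node fed the same value x through two weights behaves exactly
// like a one-input node whose weight is the sum of the two.
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

const twoInputNode = (x, w11, w21, b) => sigmoid(w11 * x + w21 * x + b);
const oneInputNode = (x, w, b) => sigmoid(w * x + b);

const x = 0.3, w11 = 0.5, w21 = -1.2, b = 0.1;
console.log(twoInputNode(x, w11, w21, b));  // same value...
console.log(oneInputNode(x, w11 + w21, b)); // ...as the summed-weight version

The same folding works for an inverted second input, since w_11*x + w_21*(1 - x) + b = (w_11 - w_21)*x + (b + w_21), so any difference measured in the tests below would be about training speed rather than what the network can represent.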

@iciclesoft (Author) Apr 22, 2018

Alright, nice. My reasoning was that with 1 input, the NN must use a single value to represent 3 outcomes, namely 0 and 1 for 'close to a border' and 0.5 for 'not close to a border'. With 2 inputs, each value only has to represent 2 outcomes. This, however, is wrong, as can be seen here: http://www.iciclesoft.com/preview/nn-test

The code for these test cases can be found at https://github.com/iciclesoft/NN-Test

The remaining question, however, is whether we want to go back to just the x and y positions (like it was before) or stick with the border stroke. My vote would be for the x and y positions, which the vehicles seem to pick up quite nicely after running for about one minute at 100x speed. Edit: I did some more testing today; sometimes the borders are picked up very fast, other times they seem to be ignored for quite some time.

@iciclesoft (Author)

@meiamsome So I've done some more tests. The previous tests were mainly about the ability of a given neural network to find both the north and south borders on a plane. I've changed them to also record the average number of training cycles needed to distinguish between the two borders and the 'middle ground'. Here we can actually see a difference between having one or two inputs, and even a difference in how the inputs are given.

Mostly, the results show that two inputs, where the second input is the invert of the first, need the fewest update cycles to complete. This is usually slightly 'better' than having two identical inputs (which is quite strange). Having just one input is usually about 30-40% slower. That is the case when the neural networks have 32 hidden nodes; to make it even stranger, when the NN has only 4 hidden nodes, the average number of updates needed is much closer across the variants.

It gets even stranger. Before, I had the 'allowed error rate', which is used to determine whether a test completed successfully, at 1% instead of 2%. I would have said this shouldn't affect the results much, since it's the same for each test, but in those tests two identical inputs usually required the fewest updates (instead of two inputs where the second is the invert of the first).

All in all, it seems that having multiple inputs, whether they are inverted or not, does help the neural network learn about the borders more quickly.

By the way, I've updated both http://www.iciclesoft.com/preview/nn-test and https://github.com/iciclesoft/NN-Test if you're interested in the tests.
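
For reference, the three input encodings compared above might look something like this (a rough sketch with hypothetical names; the actual test code lives in the NN-Test repo linked above):

// Illustrative versions of the three encodings of a vehicle's vertical position.
const oneInput = (y, height) => [y / height]; // one normalized input
const twoSameInputs = (y, height) => {
  const a = y / height;
  return [a, a]; // second input duplicated
};
const twoInvertedInputs = (y, height) => {
  const a = y / height;
  return [a, 1 - a]; // second input is the invert of the first
};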

@CodingTrain deleted a comment from iciclesoft Apr 23, 2018