It’s time for voting in the challenge (see the announcement). Two and a half weeks have brought us ten submissions in the AWS DeepRacer log analysis challenge. I think this is a lot. Many of the contestants submitted code to open source or public domain projects for the first time, and I know how hard that can be – not only because of the technical challenge, but because it takes courage to show one’s code and expose oneself to the opinions of others. To me, all of you who decided to take part have accomplished something big.
What happens now?
Once the descriptions of the changes are published, voting will begin. The voting will take place between 00:00 BST on Thursday 24th of October 2019 and 23:59 GMT on Thursday 31st of October 2019.
Each change will have an associated message in the #league-contest channel of the community Slack, and votes will be posted publicly in threads under each message. Then we will count the votes, add my special votes and announce the winners.
Remember you have one vote.
Submissions
Let’s have a quick walk through the changes proposed. Entries are listed in the order of submission time.
GummyBear
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/2
The first PR came with a mixture of minor improvements and big additions.
The most noticeable minor improvement is applying a logarithm to the reward. When the top rewards granted differ significantly from the mean value, the reward graph can look like a big spike for a couple of steps while the rest is indistinguishable from zero. Putting the values through a logarithm function reduces the scale of the difference.
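To illustrate the idea (this is a minimal sketch with invented reward values, not GummyBear's actual code):

```python
import numpy as np
import pandas as pd

# Invented per-step rewards: mostly small values plus one large spike --
# the situation where a linear-scale reward plot becomes unreadable.
rewards = pd.Series([0.5, 0.8, 0.9, 1.1, 1.2, 250.0])

# log1p = log(1 + x) keeps zero rewards at zero and compresses the spike,
# so the small values stay visible on the same axis.
log_rewards = np.log1p(rewards)

print(round(rewards.max() / rewards.median(), 1))          # 250.0
print(round(log_rewards.max() / log_rewards.median(), 1))  # 8.0
```

Plotting `log_rewards` instead of `rewards` turns a 250:1 spread into roughly 8:1, so the shape of the typical reward is no longer flattened against zero.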
GummyBear added graphs for reviewing the parameters throughout the training:
(this can be run for any parameter in the logs)
Action breakdown presents how frequently a given action was chosen:
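The essence of such a breakdown can be sketched in a few lines of pandas (invented data; the column name is illustrative, not the notebook's actual schema):

```python
import pandas as pd

# Invented per-step log: which action index the policy chose at each step.
df = pd.DataFrame({"action": [0, 2, 2, 1, 2, 0, 2, 1, 2, 2]})

# Counting how often each action was chosen is the core of an action
# breakdown chart; breakdown.plot(kind="bar") would render it.
breakdown = df["action"].value_counts().sort_index()
print(breakdown.to_dict())  # {0: 2, 1: 2, 2: 6}
```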
There is a plot of steering decisions depending on the location on the track:
There is some analysis about unexpectedly rewarded episodes:
And finally, there is a plot for a new reward with values put as labels on the track:
Evanca
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/3
Evanca looked at how overwhelming the notebook can be for a new starter. She has therefore replicated the console graph to give users a familiar starting point:
Daniel Morgan
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/4
His change adds a plot of the track with the starting point and every tenth waypoint highlighted, and saves it as a printable PNG file:
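A minimal sketch of the same idea, using an invented circular track (real coordinates would come from the track data in the logs; the output filename is hypothetical):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # draw off-screen so this runs without a display
import matplotlib.pyplot as plt

# Invented track: a circle of 120 waypoints.
theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
waypoints = np.column_stack([np.cos(theta), np.sin(theta)])

fig, ax = plt.subplots()
ax.plot(waypoints[:, 0], waypoints[:, 1], ".", color="lightgray")

# Mark the start and every tenth waypoint, labelled with its index,
# to produce a printable reference image.
for i in range(0, len(waypoints), 10):
    ax.plot(waypoints[i, 0], waypoints[i, 1], "ro")
    ax.annotate(str(i), waypoints[i])
ax.set_aspect("equal")
fig.savefig("waypoints.png")
```

Having waypoint indices on paper is handy when writing reward functions that reference specific parts of the track.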
Tom17
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/5
Tom17 decided to dig around in the services to find information about the jobs being run. He uses the fetched information to provide more details about the training being analysed, and he also adds some widgets to the notebook so that users can select a log instead of writing code to choose it.
Finlay Macrae
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/7
Finlay also focused on analysing the track through finding and visualising its characteristics:
Apart from that, he has added some documentation to the code to make it easier for the next users to pick up working with it.
Tony Markham
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/8
Tony took a different approach and focused on simplifying local training through a couple of utilities that detect and load the log files, but also load the hyperparameters and automatically detect the number of episodes per iteration. He also attached a whole document describing how to use it:
Ahmed Shendy
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/9
This is a change focused on improving work with multiple logs coming from different sources. Ahmed’s addition is pretty much a standalone log manager that fetches information about log files (both local and in CloudWatch), detects the track name, model name and dates of training, and generates a notebook based on a template to start working with a selected file. Information fetched from AWS is cached to avoid needless repetitive downloads.
Chris Thompson
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/10
Chris brings us analysis of the car’s exit points throughout the training:
Look for:
Chris’ description of the graphs
a) Many different colors in one spot mean the car has trouble navigating in that area regardless of start position
b) Many of the same color in one spot means the car had trouble with a particular starting position; this can be disregarded if it was due to middle-of-track starts during training, but should be looked at if the start position was the actual track zero waypoint
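Case (a) can also be checked numerically. A small sketch with invented data (the column names are mine, not Chris's):

```python
import pandas as pd

# Invented off-track events: where each episode started and the waypoint
# where the car left the track.
exits = pd.DataFrame({
    "start_at": [0, 25, 50, 75, 0, 25],
    "exit_wp":  [60, 61, 60, 59, 61, 60],
})

# Case (a): many *different* start positions exiting at the same spot
# suggests the car struggles there regardless of where it began.
distinct_starts = exits.groupby("exit_wp")["start_at"].nunique()
print(distinct_starts.to_dict())  # {59: 1, 60: 3, 61: 2}
```

Here waypoint 60 collects exits from three distinct start positions, flagging it as a genuine trouble spot.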
Cahya Wirawan
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/11
Cahya has focused on providing more plots of the track that one can interact with using a mouse:
When you move your mouse over various elements of the plot, additional details are presented.
Apart from that, he also prepared a utility to run log analysis using Docker.
Mithilesh Hinge
Pull request: https://github.com/aws-deepracer-community/aws-deepracer-workshops/pull/12
Mithilesh has presented a wide range of changes and improvements:
In my log-analysis I have:
1. Removed the steps/time graph and replaced it with a new graph. This shows the difference between histograms of actions used in the first iteration vs the last iteration. When you want to speed up your model, this will be quite useful to see how frequently the actions with higher-speed values are triggered, and whether their frequency has increased over the training.
2. Colour coded the almost useless total_reward/episode graph according to start_at. The original graph varies a lot over the episodes, rendering it useless. This variation is because each episode starts at a different waypoint. So if there is a hard turn at waypoint 100, an episode starting at waypoint 0 will have a much higher total reward than an episode starting at waypoint 80. But if the total_rewards of episodes with similar start_at positions are compared, we are able to find a pattern, making it a quite useful tool. Colour coding this graph according to start_at solves this problem.
3. Added a greyscale colourmap to the bottom 3 graphs of scatter_aggregates. This graph is very useful already, and helps us find bottlenecks in the track where our model is struggling by giving us exact waypoint numbers. But it tells very little about whether our current training session has improved the model in these bottleneck areas or not. Adding a greyscale colourmap shows the movement of the 3 graphs across the iterations. This, coupled with point 2, makes for a very powerful analysis tool.
4. Improved the code for plotting paths taken in an entire iteration. This is a useful tool and is used by everyone who doesn’t want to keep staring at the screen for the entire training session. In the original code it took about 10-20 seconds to plot all the paths in one iteration, making it very frustrating to keep checking up on the paths in each iteration during training. The enhanced code does this in less than a second. You can add code to plot multiple graphs of multiple iterations if you’d like to, without worrying about having to wait for minutes.
Possible improvement: I am currently fiddling with the idea of using similar code to improve the speed of the ‘heatmap plotter of all rewards’ (the red and black graph). Similar to the above, this code is also used quite frequently but requires an unnecessarily long time to execute due to the ‘for’ loops.
I’d be really glad to see someone pull this off.
5. Added a cell that plots the perfect race line within seconds. This code runs a dummy car across a dummy track using Bresenham raycasting to find the perfect race line.
Possible improvement: Currently the plotted perfect race line will overlap a bit because I have not provided a very intelligent stopping condition. The code runs for a fixed number of steps; this could be improved to merge the overlapping regions into a final race line. Another improvement is printing the entire race line’s coordinates so that they could be used in the reward function. I have done this in my personal log-analysis notebook and intend to make a commit soon. In the meantime, if you need to implement it for your own use right now, contact me on Slack.
6. Added a boolean flag called “wp” that enables you to label waypoints on the track in any graph in the log analysis notebook.
7. Improved the new_reward feature. This feature produced slightly inaccurate reward values due to different implementations of the df_to_params function in log_analysis.py and inside the actual simulation. It was also incomplete. Now it is implemented exactly as inside the simulation, and the ‘track_width’ and ‘is_left_of_center’ params are now available for those who use them in their reward functions (again, implemented exactly as inside the simulation).
Mithilesh on his changes
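Point 2 of Mithilesh's list can be illustrated with a minimal sketch (invented numbers and column names, not his actual code):

```python
import pandas as pd

# Invented episode summary: total reward per episode and the waypoint
# the episode started from.
episodes = pd.DataFrame({
    "start_at":     [0, 40, 80, 0, 40, 80, 0, 40],
    "total_reward": [120, 90, 35, 130, 95, 40, 125, 88],
})

# Raw total_reward jumps around from episode to episode, but grouping by
# start position reveals consistent bands -- exactly what colour coding
# the total_reward/episode scatter by start_at makes visible.
bands = episodes.groupby("start_at")["total_reward"].mean()
print(bands.to_dict())  # {0: 125.0, 40: 91.0, 80: 37.5}
```

The per-start averages are tight while the overall series looks noisy, which is why comparing episodes with similar `start_at` values turns the graph into a useful tool.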
Summary
I encourage you to join the community and vote for the entry you find most valuable. Let the voting start!
Well done everybody! I think you all deserve some credit 🙂