Sudeepam

::- That depends on the task. I'd say that, if the outcome is defined or at least predictable with some decent accuracy, I'll discuss the problem statement and immediately get down to code. Otherwise, I'll code up a '''small model''' first to see if the approach really would work and proceed thereafter.
== Y: Your task ==
* '''Did you select a task from our list of proposals and ideas? If yes, what task did you choose? Please describe what part of it you especially want to focus on if you can already provide this information.'''
::Yes, I have decided to work on the '''command line suggestion feature''' [https://savannah.gnu.org/bugs/?46881]. It will suggest corrections to the user whenever they make a typographic error. This feature is essentially a complex decision-making problem, and I will therefore approach it with Neural Networks written in Octave (m-scripts) itself.
::''My special focus would be to keep the trade-off between the accuracy and the speed of the feature minimal.'' Let me first describe the three kinds of Neural Networks that we could end up making (depending on the training data available).
* '''A network trained with only the correct spellings of the inbuilt functions'''
: This type of network would be very easy to make because it requires only a list of all the existing functions of GNU Octave and no additional data. With this approach, we would end up with a Neural Network that easily recognizes typographic errors caused by '''letter substitutions''' and '''transpositions of adjacent letters'''. In fact, this network would recognize multiple letter substitutions and transpositions, not just single ones. I say this with such confidence because I have already made a working neural network of this type [https://github.com/Sudeepam97/Did_You_Mean]. This network would, however, perform poorly if an error is caused by the '''accidental inclusion''' or '''accidental deletion''' of letters.
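: The details of my network are in the repository linked above; as a minimal, hypothetical illustration (in Python, for brevity, although the project code is Octave m-scripts) of why order-insensitive character features tolerate substitutions and transpositions, consider matching a typo to the closest function name by Jaccard similarity over character bigrams. A single substitution or an adjacent transposition disturbs at most two bigrams, so most of the feature set survives:

```python
def bigrams(word):
    """Set of adjacent character pairs, e.g. 'plot' -> {'pl', 'lo', 'ot'}."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def closest_function(typo, functions):
    """Return the known function whose bigram set overlaps most with the typo."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    tb = bigrams(typo)
    return max(functions, key=lambda f: jaccard(tb, bigrams(f)))

# 'kmaens' transposes the 'e' and 'a' of 'kmeans' but is still matched correctly.
print(closest_function("kmaens", ["kmeans", "mean", "plot", "linspace"]))  # -> kmeans
```

: An insertion or deletion, by contrast, shifts every subsequent character, which is exactly why this first kind of network struggles with those errors.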
* '''A network trained with the correct spellings of the functions and self created errors'''
: This would be slightly harder to make but should give us improved performance. I will '''create some misspellings''' of all the functions by inserting, deleting, substituting, and transposing one or two letters, and then add these self-created misspellings to the dataset used to train the network. Such a network would learn what correct spellings and random typographic errors look like. It would handle substitutions and transpositions as easily as the previous network, but would also be more accurate at predicting errors caused by insertions and deletions. It is worth mentioning, however, ''that we may create errors while creating errors'': because the training data is modified randomly, the Neural Network may, in rare cases, show uncertain behaviour.
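: To make the four edit operations concrete, here is a small, hypothetical Python sketch of such a misspelling generator (the real one would be an Octave m-script). The core function is deterministic, taking the operation and position as parameters, and a random driver simply picks those at random:

```python
import random
import string

def make_typo(word, op, pos, ch="x"):
    """Apply one edit at index pos: 'insert', 'delete', 'substitute',
    or 'transpose' (which swaps the letters at pos and pos + 1)."""
    if op == "insert":
        return word[:pos] + ch + word[pos:]
    if op == "delete":
        return word[:pos] + word[pos + 1:]
    if op == "substitute":
        return word[:pos] + ch + word[pos + 1:]
    if op == "transpose":
        return word[:pos] + word[pos + 1] + word[pos] + word[pos + 2:]
    raise ValueError(op)

def random_typo(word, rng=random):
    """Pick a random operation, position, and replacement letter."""
    op = rng.choice(["insert", "delete", "substitute", "transpose"])
    pos = rng.randrange(len(word) - 1)  # leaves room for 'transpose'
    return make_typo(word, op, pos, rng.choice(string.ascii_lowercase))

print(make_typo("plot", "transpose", 1))  # -> polt
```

: Applying this once or twice to every function name yields the synthetic misspellings described above; the "errors while creating errors" risk is that a random edit can occasionally produce another valid name or an implausible typo.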
* '''A network trained with the correct spellings of the functions and the most common typographic errors'''
:To make this kind of Neural Network, we need to know what common typographic errors look like. With that goal in mind, I have already contacted the people behind octave-online.net [https://octave-online.net/], who say that they are happy to support the development of GNU Octave and have shared a list of the ''top 1000 misspellings'' with me through email. However, the users of octave-online.net are only one segment of the entire user base. '''For best results,''' we would need the involvement of the entire Octave community, which also means this would be the hardest, and the most fun, Neural Network to make.
:I will create a script that catches typographic errors, ask the users of GNU Octave to run it and share the most common spelling errors with us, and train the network on the dataset thus created. The resulting Neural Network would understand what correct spellings and the '''most common''' typographic errors look like, and would give good results almost every time, with all kinds of errors. This is because, when our network knows what the common errors look like, most of the time it would '''know the answer''' beforehand; for the remaining cases, the network would be able to '''predict the correct answer'''.
:: I understand that using Neural Networks may seem like overkill, and that one could think of using traditional data structures like tries, or algorithms like 'edit distance', which are designed for exactly these kinds of problems. However, after due consideration, as described below, neural networks look to me like the best way to minimize the trade-off between the speed and accuracy of the feature.
::Edit distance, while accurate, would be the slowest of the three approaches, and tries, though fast, would not generalize to unknown typographic errors. A neural network, when trained with proper data, would be highly accurate and would generalize to unknown typographic errors; and because ultimately '''a 'trained' Neural Network will be merged''' with core Octave, this approach would be fast as well. Another disadvantage of tries worth mentioning: if we were unable to collect a sufficiently large list of '''common spelling errors''', a trie would fail miserably, whereas a neural network would, even in that case, easily identify letter substitutions and transpositions of adjacent letters.
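::For reference, the classic dynamic-programming form of edit (Levenshtein) distance that I am comparing against looks like this (a Python sketch, not project code). Its cost is O(len(a) * len(b)) per candidate, and a lookup must run it against every one of Octave's thousands of function names, which is where the speed concern comes from:

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (classic DP, two rows)."""
    prev = list(range(len(b) + 1))           # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute / match
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # -> 3
```

::Note also that plain Levenshtein counts an adjacent transposition as two edits (two substitutions, or a delete plus an insert), whereas a network trained on real typos can treat it as the single, very common error it actually is.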
::At a later stage (possibly after GSoC), I could merge the data extraction script with Octave so that the performance of the Network could be improved with time. This could come with an easy disable feature, so that only the users who would like to share their spelling errors would do so.
*'''Please provide a rough estimated timeline for your work on the task.'''
:'''Preparations for the project (pre-community bonding)'''
::While this application is being reviewed, I have started working on an m-script that will be used to extract the most common spelling errors (I'm assuming that I'll be working on the third kind of classifier Neural Network). This script will catch the most common typographic errors that users make. The list of errors could then be...
:::-Uploaded to a secure server directly.
:::-Stored as a text file and we can ask the users to share this file with us.
::I'd like to mention here that the data we receive would essentially be a list of misspelled function names only. As long as no user metadata is attached to it, we don't really need to hide it from anyone; all we need is that no unauthorized person can modify or delete it. Nonetheless, exactly how we receive the data collected from the users will need some discussion with the community.
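::The extraction logic itself is simple; here is a hypothetical sketch of it in Python (the real script would be an Octave m-script hooked into the "undefined function" path, and the names below are illustrative only): whenever a command is not in the list of known functions, append the bare name to a local text file that the user can later share.

```python
from pathlib import Path

KNOWN_FUNCTIONS = {"plot", "sum", "kmeans", "linspace"}  # stand-in for Octave's list
LOG_FILE = Path("misspellings.txt")

def record_unknown_command(name, known=KNOWN_FUNCTIONS, log_file=LOG_FILE):
    """If name is not a known function, log it; return True when logged.

    Only the bare misspelled name is stored -- no user metadata -- which is
    why the collected file needs integrity protection but not secrecy.
    """
    if name in known:
        return False
    with log_file.open("a", encoding="utf-8") as f:
        f.write(name + "\n")
    return True

record_unknown_command("plt")   # unknown: appended to the log
record_unknown_command("plot")  # known function: ignored
```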
:'''Community Bonding period'''
::I will use the community bonding period to...
:::-Persuade the community to use our data extraction script and help us collect training data. This can be done by discussing the benefits of a command line suggestion feature and sharing my current implementation of this feature [https://github.com/Sudeepam97/Did_You_Mean].
:::-Ask the community to report issues with the m-script containing the current implementation. I’ll shift the current implementation to mercurial if required.
:::-Discuss how we should receive the data generated by the users, work on the approach, and start the collection of data.
:::-Organize the data as it is received and divide it into proper training, cross-validation, and test sets for the Neural Network.
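::A minimal sketch of the split mentioned in the last point (the ratios are my assumption; 60/20/20 for training/cross-validation/test is a common choice, and a fixed seed keeps the split reproducible):

```python
import random

def split_dataset(examples, seed=0, train=0.6, cv=0.2):
    """Shuffle and partition (misspelling, correct_name) pairs into
    training, cross-validation, and test sets."""
    data = list(examples)
    random.Random(seed).shuffle(data)   # fixed seed -> reproducible split
    n = len(data)
    n_train = int(n * train)
    n_cv = int(n * cv)
    return (data[:n_train],
            data[n_train:n_train + n_cv],
            data[n_train + n_cv:])

pairs = [("plto", "plot"), ("smu", "sum")] * 50   # toy dataset of 100 pairs
train_set, cv_set, test_set = split_dataset(pairs)
print(len(train_set), len(cv_set), len(test_set))  # -> 60 20 20
```

::The cross-validation set is what I will later use to choose the data-dependent parameters (hidden layers, neurons per layer), keeping the test set untouched for the final accuracy measurement.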
:'''May, 14 – June, 10 (4 weeks)'''
::'''Week 1 (May, 14 – May, 21):''' I will not be able to do a lot of work this week as I have my final examinations at this time. I'll treat this week as an extension of the community bonding period and use it to collect issues, gather more data, and divide it into proper datasets.
::'''Week 2 and Week 3 (May, 21 – June, 3):''' Most of the code of the Neural Network will be identical to my current implementation, so I'll start by making the current implementation bug free (some known issues can be found here: [https://github.com/Sudeepam97/Did_You_Mean/issues]) and bringing it in line with the Octave coding standards. I plan to keep collecting user data during these weeks as well, so I'll leave room for tuning network parameters such as the number of hidden layers and the number of neurons per hidden layer, because these depend on the data. If this work is completed ahead of time, I'll move straight on to the next week's work.
::'''Week 4 (June, 4 – June, 10):''' By now we will have sufficient data: the list from octave-online.net plus approximately 6 weeks of the extraction script's usage. I'll give the data a final review and start training the Neural Network on it. I will choose values of the data-dependent network parameters that fit the learning parameters (weights) of the Neural Network to our data with high accuracy while keeping the network fast. I will then measure the accuracy of the network on the cross-validation and test sets to see how it generalizes to unknown typographic errors. I will also write some additional tests for the various m-scripts used.
::'''Phase 1 evaluations goal:''' A set of working neural network m-scripts, which could suggest corrections for typographic errors.
:'''June, 11 – July, 8 (4 weeks)'''
::'''Week 5 (June, 11 – June, 17):''' I'd like to take this week to work closely with the community and test the newly created m-scripts. Essentially, I'll ask the community to try out the m-scripts and see how they work for them. I will work on the issues pointed out by the community and the mentors as they are reported, and try to polish the m-scripts within this week itself.
::'''Week 6 (June, 18 – June, 24):''' I'll fix any remaining issues and proceed to discuss and understand how our Neural Network should be integrated with Octave. I'll start working on the integration as soon as the approach is decided. It is worth mentioning that we will merge a trained network with Octave, so the expensive training step happens offline and the suggestion feature itself should stay fast.
::'''Week 7 – Week 8 (June, 25 – July, 8):''' I will integrate our Neural Network with Octave as discussed, and write and perform tests to make sure that everything works the way it should. If this task is completed earlier than expected, I'll move on to the next task.
:'''Phase 2 evaluations goal:''' A development version of Octave which has a command line suggestion feature (at this stage there will be no mechanism to easily select a suggested correction or to enable/disable the feature).
:'''July, 9 – August, 5 (4 weeks):'''
::'''Week 9 (July, 9 – July, 15):''' The development version of Octave, with the inbuilt suggestion feature, will be open for error reports. I'll work on the issues as they are reported, and also discuss what an easy enable/disable mechanism, and a mechanism for easily selecting the suggested corrections, should look like.
::'''Week 10 (July, 16 – July, 22):''' I’ll create the required mechanisms as discussed, write and perform tests, and push a development version with a complete command line suggestion feature.
::'''Week, 11 – Week, 12 (July, 23 – August, 5):''' I’ll work in close connection with the community, fix the issues that are reported, and ask for further suggestions on how the command line suggestion feature could be made better.
:'''Phase 3 evaluations goal:''' A development version of Octave with a complete and working command line suggestion feature, open to feedback and criticism.