:'''Last days of GSoC:'''
::During the last days of GSoC, I’ll try to improve the command line suggestion feature, based on the feedback received.
== Project Description ==
My special focus is on keeping ''the trade-off between the speed and accuracy of the feature'' to a minimum. Before talking about that, let me first describe the three kinds of Neural Networks (depending on the training data available) that we could end up making.
:'''1) A network trained with only the correct spellings of the built-in functions'''
This type of network would be very easy to make because it requires only a list of all the existing functions of GNU Octave and no additional data. With this approach, we would end up creating a Neural Network which easily understands typographic errors caused by '''letter substitutions''' and '''transpositions of adjacent letters.''' In fact, this network would understand multiple letter substitutions and transpositions, not only single-letter ones. I am confident about this because I have already made a working neural network of this type [https://github.com/Sudeepam97/Did_You_Mean]. This network would, however, perform poorly if an error is caused by the '''accidental inclusion''' or '''accidental deletion of letters.'''
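To make the intuition concrete, here is a minimal sketch of why such a network can tolerate these errors. The character-bigram encoding below is purely my own assumption for illustration (the proposal does not fix an input representation); the point is that a substitution or an adjacent transposition only changes a small part of the encoded vector, so a classifier trained on correct spellings alone can usually still map the typo to the right function.
<syntaxhighlight lang="octave">
## Sketch only: encode a function name as a bag of character bigrams.
## The alphabet and encoding are illustrative assumptions, not a design decision.
function vec = bigram_encode (name)
  alphabet = "abcdefghijklmnopqrstuvwxyz0123456789_";
  n = numel (alphabet);
  vec = zeros (1, n * n);            # one slot for every possible bigram
  name = lower (name);
  for k = 1:numel (name) - 1
    i = strfind (alphabet, name(k));
    j = strfind (alphabet, name(k + 1));
    if (! isempty (i) && ! isempty (j))
      vec((i - 1) * n + j) += 1;     # count this bigram
    endif
  endfor
endfunction

## A correct spelling and a transposed typo still share most of their bigrams,
## so their encodings stay close:
v1 = bigram_encode ("linspace");
v2 = bigram_encode ("linspcae");
printf ("bigrams shared by the two spellings: %d of %d\n", sum (min (v1, v2)), sum (v1));
</syntaxhighlight>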
:'''2) A network trained with the correct spellings of the functions and self-created errors'''
This would be slightly harder to make but should give us improved performance. I will '''create some misspellings''' for all the functions by inserting, deleting, substituting, and transposing one or two letters, and then add all these self-created misspellings to the data set used to train the network. Such a network would understand what '''correct spellings and random typographic errors''' look like. It would easily understand substitutions and transpositions like the previous network, but should also be more accurate at predicting errors caused by additions/deletions. However, it is worth mentioning here that ''we may create errors while creating errors'': because the training data for this network is modified randomly, the Neural Network may, although the chances are rare, show uncertain behaviour.
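As an illustration of the error-generation step, the following sketch produces a few random misspellings of a given function name. The details (how many variants, uniform random positions, lowercase letters only) are my own assumptions for the example and would be tuned during the project.
<syntaxhighlight lang="octave">
## Sketch: generate synthetic misspellings of a correct function name via
## insertion, deletion, substitution, and transposition of single letters.
function errs = make_misspellings (name, n_variants)
  if (nargin < 2)
    n_variants = 4;                   # illustrative default
  endif
  letters = "abcdefghijklmnopqrstuvwxyz";
  errs = cell (1, n_variants);
  for v = 1:n_variants
    s = name;
    op = randi (4);
    if (op == 4 && numel (s) < 2)
      op = 3;                         # cannot transpose a one-letter name
    endif
    switch (op)
      case 1                          # insert a random letter
        p = randi (numel (s) + 1);
        s = [s(1:p-1), letters(randi (26)), s(p:end)];
      case 2                          # delete one letter
        p = randi (numel (s));
        s(p) = [];
      case 3                          # substitute one letter
        p = randi (numel (s));
        s(p) = letters(randi (26));
      case 4                          # transpose two adjacent letters
        p = randi (numel (s) - 1);
        s([p, p+1]) = s([p+1, p]);
    endswitch
    errs{v} = s;
  endfor
endfunction

## Example: a few synthetic typos for "strsplit"
make_misspellings ("strsplit")
</syntaxhighlight>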
:'''3) A network trained with the correct spellings of the functions and the most common typographic errors'''
To make this kind of Neural Network, we need to know what common typographic errors look like. With that goal in mind, I have already contacted the people behind octave-online.net [https://octave-online.net/], who say that they are happy to support the development of GNU Octave and, as of now (25th March), have shared a list of the top 1000 misspellings with me through email. However, the users of octave-online.net are only one part of the entire user group. '''For best results''', we would require the involvement of the entire Octave community, which also implies that this will be the hardest and the most fun Neural Network to make.
The plan is to write a script that catches typographic errors, ask the users of GNU Octave to run this script and share their most common spelling errors with us, and train the network on the data set thus created (a sketch of such a script is given after the next paragraph). This would give us a Neural Network that understands what '''correct spellings and the most common typographic errors''' look like. Such a network would give good results almost every time and with all kinds of errors, because when the network knows what the common errors look like, most of the time it would '''know the answer''' beforehand, and for the remaining cases it would still be able to '''predict the correct answer'''.
At a later stage (possibly after GSoC), I could merge the data extraction script with Octave so that the performance of the network could improve with time. This could come with an easy disable feature, so that only the users who would like to share their spelling errors do so.
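A rough sketch of what this opt-in data-collection script could look like is below. Octave's existing missing_function_hook setting, which names a function to call whenever an undefined identifier is hit, seems like a natural place to plug in; the hook function and the log-file location used here are only placeholders of mine, not anything decided for the project.
<syntaxhighlight lang="octave">
## Placeholder hook: append every unknown identifier to a local log file
## that the user can inspect and, if they agree, share with the developers.
function log_missing_name (name)
  fid = fopen (fullfile (getenv ("HOME"), ".octave_typo_log"), "a");
  if (fid >= 0)
    fprintf (fid, "%s\n", name);
    fclose (fid);
  endif
endfunction

## Opting in and out is a single call, which is the "easy disable feature"
## mentioned above:
missing_function_hook ("log_missing_name");    # start collecting typos
missing_function_hook ("__unimplemented__");   # restore Octave's default hook
</syntaxhighlight>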
I understand that using Neural Networks may seem like overkill, and that one could think of using traditional data structures like a trie, or algorithms like 'edit distance', which are made for exactly these kinds of problems.
However, edit distance, while accurate, would be the slowest approach of the three, because we would essentially need to calculate the edit distance between the input and '''all the functions''' of Octave, and a trie, though fast, would not be able to generalize to unknown typographic errors. A Neural Network trained with proper data would be highly accurate, would generalize to unknown typographic errors, and, because ultimately '''a 'trained' Neural Network''' will be merged with Octave, this approach will be fast as well.
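To make the cost argument concrete, this is roughly what the brute-force edit-distance lookup would look like: a standard Levenshtein implementation applied to every known name. The short list of names here is only a stand-in for the full function list.
<syntaxhighlight lang="octave">
## Standard Levenshtein edit distance between two strings.
function d = edit_distance (a, b)
  m = numel (a);
  n = numel (b);
  D = zeros (m + 1, n + 1);
  D(:, 1) = (0:m)';              # cost of deleting all of a
  D(1, :) = 0:n;                 # cost of inserting all of b
  for i = 1:m
    for j = 1:n
      cost = (a(i) != b(j));     # 0 if the characters match, 1 otherwise
      D(i+1, j+1) = min ([D(i, j+1) + 1, ...   # deletion
                          D(i+1, j) + 1, ...   # insertion
                          D(i, j) + cost]);    # substitution / match
    endfor
  endfor
  d = D(m+1, n+1);
endfunction

## Brute-force lookup: one full table per known function name.
names = {"linspace", "logspace", "strsplit", "cellfun"};   # stand-in list
dists = cellfun (@(f) edit_distance ("linspcae", f), names);
[~, best] = min (dists);
printf ("closest match: %s (distance %d)\n", names{best}, dists(best));
</syntaxhighlight>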
Another disadvantage of a trie that I'd like to mention here is that if we are unable to gather a sufficiently large list of common spelling errors, or if an error is made while typing the first few characters of the function, a trie would fail miserably, whereas a neural network, even in that case, would easily identify letter substitutions and transpositions of adjacent letters.
This is why, after the considerations described above, '''neural networks look to me like the best solution for minimizing the trade-off between the speed and accuracy of the feature''', and this is why I have chosen to use them.
Also, when using Neural Networks, we '''have the option''' not to fix a definition of 'close', because the neural network, thanks to its sigmoid activation, will find the most probable match on its own.
However, once in a while a neural network could make a very ambiguous prediction, so for an even better user experience we could add a 'control' to make sure that a very ambiguous output of the Neural Network is not shown to the user. A simple control could be requiring a 'close' edit distance between the output of the neural network and the misspelled input that the user gave. Again, if this extra feature is added for a better UX, 'close' will have to be defined.
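A minimal sketch of that control, reusing the edit_distance helper from the earlier snippet, could look like the following; the threshold of 2 is only a guess at what 'close' might mean, not a decision.
<syntaxhighlight lang="octave">
## Sketch: only show the network's suggestion if it is close to the input.
function maybe_suggest (user_input, nn_prediction)
  threshold = 2;   # assumed definition of "close"; would need tuning
  if (edit_distance (user_input, nn_prediction) <= threshold)
    printf ("error: '%s' undefined. Did you mean '%s'?\n", ...
            user_input, nn_prediction);
  else
    printf ("error: '%s' undefined.\n", user_input);
  endif
endfunction
</syntaxhighlight>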