Even after centuries of study, we still have very little hard scientific knowledge about natural languages (NLs). Unlike in other branches of engineering, we do not know the exact physical or mathematical laws which NLs follow, or even whether they follow any. So, at least for the time being, we can only rely on empirical techniques for solving practical problems in Natural Language Processing (NLP). Even after some general approach seems to hold promise for solving a problem, a lot of practical work remains to be done in refining the methods and in tuning the systems for the best possible performance. This is why, once an initial breakthrough has been made, many people have to try the techniques under different conditions to figure out the best setup, i.e., the best selection of parameter values, features, etc. What has come to be called a ‘shared task’ is one way of ensuring that this gets done.
Shared tasks are contest-like events where many researchers, or even developers, working on a particular problem or a set of similar problems try to come up with the best systems. All the systems are evaluated on the same data to provide a fair, competition-like setting. All the participants also have to submit papers describing their systems. The major goals of a shared task are:
- To find out the state of the art in a specific area
- To simultaneously advance the state of the art, even if slightly
- To bring together researchers so that they can interact and perhaps argue and discuss
- To act as an incentive for the researchers to build proper systems, some of which may become available for use by others
It was in view of this that the NLP Association of India (NLPAI) started conducting an annual event called the NLPAI Machine Learning Contest, in which researchers, including students, are invited to participate and compete in solving a specific problem which is considered relevant. Last year, the topic of the shared task was Shallow Parsing for South Asian languages. A workshop was also organized as an extension of this event, as part of the IJCAI conference held in Hyderabad, India. The topic this year was Named Entity Recognition for South and South East Asian languages. This year’s event will also have an extended version in the form of a workshop, as part of the IJCNLP conference, which will also be held in Hyderabad, India.
In the context of South Asian languages, conducting a shared task has its own problems, the first being that funding for such events is usually hard to obtain. Without funding, it is difficult to prepare the reference data which is usually essential for a shared task. Those who have annotated data are often unwilling to share it with others. IIIT has taken a lead in preparing annotated data for various purposes and in sharing it with others. Since the data is prepared under difficult conditions, it sometimes has problems, but let us hope things will improve. In any case, data with some errors is better than no data.
Another problem is that the number of full-time researchers in NLP in South Asia is quite small, which affects the quality of submissions. Shared tasks are meant to overcome this situation by creating awareness and interest.
It needs to be emphasized that the goal is not just to show good performance on the data provided but also to build practically usable systems that perform well in general. This implies that participants are supposed to go beyond being mere competitors in a contest; the idea is to go further than just being first in the race. Participation in a shared task should be a milestone, not the final destination.
I feel compelled to end this write-up by saying that shared tasks focused on South Asia can only succeed if there is collaboration and sharing of resources by researchers working in the region. We are still far from that situation.
(This write-up was originally written for the NLPAI newsletter, Spandan, but it was taking a lo…ng time to appear; I became impatient, and so you find it here.)