COMPOSES END-OF-PROJECT WORKSHOP
The end-of-project workshop of the Composes project took place on Sunday, August 14th, 2016 in Bolzano (Italy) as a satellite event of ESSLLI 2016.
The workshop was an occasion to discuss some exciting topics in
computational semantics, with great invited speakers leading the
discussion (some of the presentation slides are available below).
Information about the venue is available here.
Germán Kruszewski, Angeliki Lazaridou, Nghia The Pham, Aurelie Herbelot, Denis Paperno, Gemma Boleda, Sandro Pezzelle, Marco Baroni, Raffaella Bernardi, Roberto Zamparelli, Irena Jatro.
Registration was free but mandatory, and managed through the ESSLLI registration page.
For inquiries, write to irena jatro AT unitn it.
Please check out the related DSALT workshop at ESSLLI 2016!
- Lessons learned from the Composes project: Which problems
were we trying to solve? Have we solved them? Have new-generation
neural networks made compositional distributional semantics obsolete?
- End-to-end models and linguistics: What is
the role of linguistics in the (new) neural
network/end-to-end/representation learning era? Do such systems need
linguistics at all? Are some linguistic theories better tuned to them
than others? Is there an appropriate vocabulary of linguistic units
for end-to-end systems? Is compositionality a solved problem? Which
linguistic challenges are difficult to tackle with neural networks?
- Fuzzy vs precise (concepts vs entities,
generics vs specifics, lexical vs phrasal/discourse semantics, analogy
vs reasoning, sense vs reference): Are learning-based statistical
methods only good at fuzzy? Can new-generation neural networks
(Memory Networks, Stack RNNs, NTMs, etc.) handle both fuzzy
and precise? Is fuzzy a solved problem?
- Learning like humans do: If we want to develop systems
reaching human-level language understanding, what is the appropriate
input? What should training data and objective functions look like?
What are appropriate tests of success? Assuming our methods are much
more data-hungry than human learning is, why is this the case? Ideas
for fixing that? What ways can we teach our models to understand, other
than through expensive labeling of data?
We gratefully acknowledge the European Commission and European Research
Council for the COMPOSES Starting Independent Research Grant funded
under the 7th Framework Programme.