Reverse EXPO is an event for graduate students from Computing Science and Electrical and Computer Engineering at the University of Alberta to present their work and gather feedback from alumni and industry. It is also an opportunity for our friends in industry and government to mingle with like-minded students and faculty and to discuss the problems and technologies we care about. Reverse EXPO is intended as an annual event, hosted jointly by both departments.
When: February 22, 2017
Where: Computing Science Centre B-2
8:30 - 9:00 am: Welcome and opening remarks
9:00 - 10:00 am: Invited talk: Reza Sherkat, alumnus (*)
10:00 - 10:30 am: Coffee break
10:30 am - 12:30 pm: Student presentations and demos
12:30 - 2:00 pm: Posters & lunch
2:00 - 3:00 pm: Industry panel
3:00 pm: Closing and coffee
Reverse EXPO will always feature an alumnus with relevant industry experience to share with our current students.
(*) Reza Sherkat is a Software Research and Development Expert at SAP Canada. He obtained his PhD from the University of Alberta in 2007, under the supervision of Davood Rafiei. Since graduation, he has been actively contributing to the development of three major commercial database systems: IBM DB2, SAP SQL Anywhere, and SAP HANA, with a focus on designing data storage and query processing algorithms. Reza was also a Postdoctoral Fellow in the Computer Science Department at The University of Hong Kong, where he worked primarily on privacy-preserving algorithms for publishing high-dimensional sequence data. He pursues research on data structures and algorithms for high-performance data processing systems.
The presentations/demos are intended to expose recent and ongoing research related to Computing Science and Engineering to industry and government representatives, with two goals in mind:
- Identifying common interests and opportunities for collaboration through joint research grants and/or graduate student internships; and
- Promoting our students to potential employers.
M. Mirzaei. Multi-Aspect Review-Team Assignment using Topic Clusters
Abstract: Reviewer assignment is an important task in many research-related activities, such as conference organization and grant-proposal adjudication. The goal is to assign each submitted artifact to a set of reviewers who can thoroughly evaluate all aspects of the artifact's content, while, at the same time, balancing the workload of the reviewers. In this paper, we focus on textual artifacts such as conference papers, where both (aspects of) the submitted papers and (expertise areas of) the reviewers can be described by terms and/or topics extracted from the text. We propose a method for automatically assigning a team of reviewers to each submitted paper, based on clustering the latent topics of the reviewers' publications. Our method uses the topic-cluster information to find a team of reviewers for each paper, such that each individual reviewer and the team as a whole cover as many paper aspects as possible. We experimentally demonstrate that our method outperforms state-of-the-art approaches on several standard quality measures.
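The coverage idea in the abstract, picking reviewers so that the team jointly covers as many paper aspects as possible, can be sketched with a toy greedy routine. This is an illustrative simplification, not the paper's actual algorithm: the topic sets below are hard-coded, whereas the method derives them by clustering latent topics of the reviewers' publications.

```python
# Toy sketch: greedily build a review team that covers the most
# paper aspects (topics). Hypothetical data; real topic sets would
# come from topic modelling over the reviewers' publications.

def assign_team(paper_topics, reviewer_topics, team_size):
    """Repeatedly pick the reviewer whose topics cover the most
    still-uncovered aspects of the paper."""
    uncovered = set(paper_topics)
    team = []
    candidates = dict(reviewer_topics)
    while len(team) < team_size and candidates:
        best = max(candidates, key=lambda r: len(uncovered & candidates[r]))
        team.append(best)
        uncovered -= candidates.pop(best)
    return team, uncovered

reviewers = {
    "r1": {"databases", "indexing"},
    "r2": {"machine learning", "nlp"},
    "r3": {"databases", "machine learning"},
}
team, missed = assign_team({"databases", "nlp", "indexing"}, reviewers, 2)
```

A greedy strategy like this does not balance reviewer workload, which the paper lists as a second objective; it only illustrates the coverage half of the problem.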
S. Fung. A Gradient Syntactic Model for Unsupervised Parsing
Abstract: Unsupervised parsers analyze the grammatical structure of natural language without needing annotation in training. They have great application potential in any task requiring in-depth linguistic analysis (e.g. machine translation, information extraction). However, currently, they are not yet reliable enough for commercial application. The limitations lie in the syntactic models being used: 1) they are based on traditional linguistic concepts that are difficult to rigorously define; and 2) they use discrete categories that fail to capture the behaviours of individual words and phrases. To address these shortcomings, I propose a Gradient Syntactic Model. Discrete parts of speech are replaced by lexical-syntactic distances between words. With these distances, neighbourhoods can be formed for each word, consisting of the words most similar to it. Phrases are no longer binary entities (either existing or not), but can be of varying degrees of coherence. Lastly, syntactic dependencies are replaced by conditional probabilities between words, phrases, and their neighbourhoods. Together, this model leads to a new definition of grammaticality, based on the sum of the strengths of the conditional probabilities within a sentence. Results from limited training show that the model is able to capture many of the expected syntactic patterns; more iterations of the training algorithm and a larger training corpus should bring better results. Moreover, the gradient nature of the model allows it to capture much more fine-grained patterns, and the new measurements it uses to identify them are much more clearly defined. The use of gradient models can thus provide grammatical analyses that are both more detailed and more meaningful.
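The grammaticality definition at the end of the abstract, a sum of conditional-probability strengths over a sentence, can be illustrated with a minimal bigram sketch. This is a hypothetical simplification: the full model also uses lexical-syntactic distances, word neighbourhoods, and gradient phrases, none of which appear here.

```python
from collections import Counter

# Toy sketch: score a sentence as the sum of estimated conditional
# probabilities P(w_i | w_{i-1}) over adjacent word pairs, from a
# tiny hard-coded corpus.

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    for prev, cur in zip(sent, sent[1:]):
        bigrams[(prev, cur)] += 1
        unigrams[prev] += 1

def grammaticality(sentence):
    """Sum of P(cur | prev) over the sentence's adjacent word pairs."""
    words = sentence.split()
    return sum(bigrams[(p, c)] / unigrams[p]
               for p, c in zip(words, words[1:]) if unigrams[p])
```

Under this scoring, a fluent word order accumulates high conditional probabilities, while a scrambled one scores near zero, which is the gradient (rather than binary) notion of grammaticality the abstract describes.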
C. P. Quintero. Pointing Gestures for Cooperative Human-Robot Manipulation Tasks in Unstructured Environments
Abstract: In recent years, robots have started to migrate from industrial to unstructured human environments. However, this migration has been at a slow pace and with only a few successes. One key reason is that current robots do not have the capacity to interact well with humans in dynamic environments. Finding natural communication mechanisms that allow humans to interact and collaborate with robots effortlessly is a fundamental research direction for integrating robots into our daily living. In this talk, we will be addressing pointing gestures for cooperative human-robot manipulation tasks in unstructured environments. By interacting with a human, the robot can solve tasks that are too complex for current artificial intelligence agents and autonomous control systems. Inspired by human-human manipulation interaction, in particular how humans use pointing and gestures to simplify communication during collaborative manipulation tasks, we developed a robot system that is able to see, interpret, and act using pointing and symbolic gestures.
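One common way to ground a pointing gesture geometrically, used here purely as an illustrative assumption rather than as this system's actual method, is to cast a ray from the shoulder through the hand and intersect it with the table plane:

```python
# Hypothetical sketch: where is the person pointing? Cast a ray
# from the shoulder through the hand and intersect it with a
# horizontal table plane (z = table_z). Coordinates in metres;
# in a real system they would come from a skeleton tracker.

def pointing_target(shoulder, hand, table_z=0.0):
    """Intersect the shoulder->hand ray with the plane z = table_z,
    or return None if the ray never reaches the plane."""
    sx, sy, sz = shoulder
    hx, hy, hz = hand
    dz = hz - sz
    if dz == 0:
        return None  # ray is parallel to the table
    t = (table_z - sz) / dz
    if t <= 0:
        return None  # plane lies behind the pointing direction
    return (sx + t * (hx - sx), sy + t * (hy - sy), table_z)

# Shoulder at 1.4 m height, hand at 1.0 m, pointing forward and down.
target = pointing_target((0.0, 0.0, 1.4), (0.3, 0.0, 1.0))
```

The returned point could then be matched against detected objects on the table to resolve which object the gesture refers to.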
S. Valipour. Incremental Learning for Robot Perception through HRI
Abstract: Scene understanding and object recognition is a difficult-to-achieve yet crucial skill for robots. Recently, Convolutional Neural Networks (CNNs) have shown success in this task. However, there is still a gap between their performance on image datasets and real-world robotics scenarios. We present a novel paradigm for incrementally improving a robot's visual perception through active human interaction. In this paradigm, the user introduces novel objects to the robot by means of pointing and voice commands. Given this information, the robot visually explores the object and adds images of it to re-train the perception module. Our base perception module is built on recent developments in object detection and recognition using deep learning. Our method combines state-of-the-art CNNs from off-line batch learning with human guidance, robot exploration, and incremental on-line learning.
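The interaction loop described above, where the user names a novel object and the robot folds new exemplars into its recognizer, can be sketched roughly as follows. This is a hypothetical stand-in: it uses a nearest-class-mean classifier over plain feature vectors, whereas the actual system re-trains a deep detection and recognition network on the collected images.

```python
# Minimal sketch of incremental object learning through interaction.
# Features are faked as short vectors; a real system would extract
# them with a CNN backbone from images the robot collects.

class IncrementalRecognizer:
    def __init__(self):
        self.sums = {}  # label -> (componentwise feature sum, count)

    def introduce(self, label, features):
        """User points at an object and names it; store its exemplars.
        New classes are added without disturbing previously learned ones."""
        for f in features:
            s, n = self.sums.get(label, ([0.0] * len(f), 0))
            self.sums[label] = ([a + b for a, b in zip(s, f)], n + 1)

    def predict(self, f):
        """Return the label whose class mean is nearest to feature f."""
        def dist(label):
            s, n = self.sums[label]
            return sum((a / n - b) ** 2 for a, b in zip(s, f))
        return min(self.sums, key=dist)

robot = IncrementalRecognizer()
robot.introduce("mug", [[1.0, 0.1], [0.9, 0.2]])
robot.introduce("ball", [[0.1, 1.0]])
```

Each `introduce` call corresponds to one pointing-plus-voice interaction; the robot can keep accepting new objects on-line without revisiting earlier classes.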
- E. Santos. Syntax Errors! How to fix them and how to find them
- C. Pang. Continuous Maintenance
- A. St Arnaud. Morphological Reinflection via Discriminative String Transduction
- Y. Qian. 3D Reconstruction of Transparent Objects with Position-Normal Consistency
- G. Nicolai. Leveraging Inflection Tables for Stemming and Lemmatization
- B. Noori. Identifying mappings between development tasks & software libraries from crowd-sourcing websites