Themes from AZCALL & Carly’s Current Research

Arizona State Flag

Recently, I attended a small conference called AZCALL 2018, hosted by the CALL Club at Arizona State University. This was the first time the graduate students in the CALL Club had planned the one-day conference, and they anticipated about 60 attendees. To their surprise, actual registrations doubled that number! The best part of attending small conferences like this one is that they are usually highly impactful without being overwhelming. So I’m still jazzed about some of the topics discussed!

The conference opened with a keynote by Jonathon Reinhardt, Associate Professor of English at the University of Arizona, about the potential of multiplayer games for second language learners. If you go to his page, you’ll see that his recent research focuses on games and gameful educational techniques, which have been very hot topics in both second language pedagogy and instructional design circles.

Aside from the now-common themes of games for education, game-based learning, and gamification, virtual and augmented reality were represented in presentations by Margherita Berti, a doctoral candidate at the University of Arizona, and in the closing keynote by the always energetic Steven Thorne, among others. Berti won the conference award for best presentation for her talk on using 360º YouTube videos and Google Cardboard to increase cultural awareness in her students of Italian. Check out her website, Italian Open Education, for more of her examples.

My personal favorite presentation was given by Heather Offerman from Purdue University, who spoke about her work using visualization of sound (via a linguistics tool called Praat) to give pronunciation feedback to Spanish language learners. Her work is very close to some of the research I’m doing on visualizing Chinese tones with Language Lesson, so I was excited to hear about the techniques she was using and how successful she feels they were as pedagogical interventions. Interestingly, at the last few CALL conferences I’ve attended, there have been more and more presentations on the need for explicit, structured teaching of L2 pronunciation in particular, which could appear to be at odds with the trend toward teaching with Comprehensible Input (check out this 2014 issue of The Language Educator by ACTFL for more info on CI). But I argue that it’s possible, and possibly a good idea, to integrate explicit pronunciation instruction with the CI methodology and get the best of both worlds. Everything in moderation, as my mom would say.

As with all things, there is no silver-bullet technology for automatically evaluating students’ L2 speech and giving them the perfect feedback to help them improve. Some researchers have focused on Automatic Speech Recognition (ASR) technologies and have been using them in their L2 classrooms. However, the use of ASR is founded on the premise that if the machine can understand you, then your pronunciation is good enough. I’m not sure that’s the bar I want to set in my own language classroom. I’d rather give students much more targeted feedback on the segmentals of their speech, feedback that helps them notice not only where their speech differs from the model, but also important aspects of the target language, so they gain a better socio-cultural understanding of verbal cues.

That is why I have been working on developing the pitch visualization component of Language Lesson. The goal is to help students who struggle with producing Chinese tones notice the variance between their speech and the model they are repeating by showing them both the model’s and their own pitch contours. Soon, I hope to have a display that overlays the two pitch contours so that students can see the differences between them very clearly. Below are some screenshots of the pitch contours that I hope to integrate in the next six months.
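To give a sense of what this looks like under the hood, here is a minimal sketch, not the actual Language Lesson code, of extracting two pitch contours and overlaying them in Python with the praat-parselmouth library, a wrapper around the same Praat pitch tracker mentioned above. The file names are hypothetical.

```python
# Minimal sketch (not Language Lesson's actual code) of overlaying a model's
# and a student's pitch contours using praat-parselmouth and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
import parselmouth  # pip install praat-parselmouth


def pitch_contour(wav_path):
    """Return (times, f0) for a recording; unvoiced frames become NaN."""
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch()                # Praat's autocorrelation pitch tracker
    f0 = pitch.selected_array['frequency']  # Hz; 0 where no pitch was found
    f0[f0 == 0] = np.nan                    # hide unvoiced frames in the plot
    return pitch.xs(), f0


# Hypothetical file names for the teacher's model and the student's attempt.
model_t, model_f0 = pitch_contour("model.wav")
student_t, student_f0 = pitch_contour("student.wav")

plt.plot(model_t, model_f0, label="Model")
plt.plot(student_t, student_f0, label="Student")
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Overlaid pitch contours")
plt.legend()
plt.show()
```

In practice, the student recording usually needs some time alignment or pitch-range normalization against the model before the overlay is a fair comparison, which is part of what makes this an interesting design problem.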

I’m looking forward to spending part of this winter break working on a research project to assess the value of pitch contour visualization for Chinese L2 learners. I will be collecting the recordings I’ve captured over the past two years and producing a dataset for each group of students (some of whom had the pitch visualization and some of whom did not). I will be looking to see whether there are differing trends in the students’ production of Chinese tones among the treatment groups. Below are just a few of the articles I’ve read recently that have informed my research direction. It should be exciting work!

Elicited Imitation Exercises

Vinther, T. (2002). Elicited imitation: A brief overview. International Journal of Applied Linguistics, 12(1), 54–73. https://doi.org/10.1111/1473-4192.00024

Yan, X., Maeda, Y., Lv, J., & Ginther, A. (2016). Elicited imitation as a measure of second language proficiency: A narrative review and meta-analysis. Language Testing, 33(4), 497–528. https://doi.org/10.1177/0265532215594643

Erlam, R. (2006). Elicited imitation as a measure of L2 implicit knowledge: An empirical validation study. Applied Linguistics, 27(3), 464–491. https://doi.org/10.1093/applin/aml001

Chinese Tone Acquisition

Rohr, J. (2014). Training naïve learners to identify Chinese tone: An inductive approach. In N. Jiang (Ed.), Advances in Chinese as a second language: Acquisition and processing (pp. 157–178). Newcastle-upon-Tyne: Cambridge Scholars Publishing. Retrieved from http://ebookcentral.proquest.com/lib/carleton-ebooks/detail.action?docID=1656455

**cross-posted from Carly’s blog, The Space Between.

Through the looking glass: Adventures with the HoloLens

This blog post has been a long time coming; I have been meaning to write about our ongoing HoloLens developments for a while. Let me start by saying that even after more than a year with the HoloLens, it still excites me more than any of the other VR/AR technology currently available. Since I last posted, we have purchased three more HoloLens units. This expansion was meant to enable multi-user experiences, something I think makes the HoloLens and AR stand out from VR in a classroom environment. These extra units have let me work on two fascinating projects, Spectator View and Shared Reality, both of which use multiple HoloLenses.

Spectator View

We have had the HoloLens for over a year now and have only one video demonstrating it, simply because of how difficult it is to record AR through the HoloLens. Microsoft thought of this and created Spectator View. Spectator View lets you plug a digital camera and a HoloLens into a computer and stitch together the images from both, which means you can record the HoloLens experience at a much higher resolution. But to do this, you need a second HoloLens and a mount to hold it onto the digital camera. So: second HoloLens, check; HoloLens mount, check (see the picture; I 3D printed one over the summer). Now came the hard part. Although Microsoft created the software for Spectator View, they don’t package it up as a nice, easy application; you have to build it yourself from the source code. After a few hours of debugging, I finally got all of the required applications working. This is our current setup.

Top view of HoloLens on plastic mount
HoloLens sitting on 3D-printed mount

I am looking forward to making some new HoloLens videos.

Shared Reality View

The second package I have been working on is a shared reality experience in which users get to explore an archaeological site, Bryn Celli Ddu, and its associated data. Similar to Spectator View, Shared Reality allows each HoloLens user to see the same hologram within the same space. This lets us create shared experiences, which is a vital tool for teaching: everyone can see and interact with the same object in the same space. It adds a whole new level to AR by allowing more social interaction, rather than isolating users in their own ‘realities’ as VR or single-user experiences do.

This shared reality experience was demoed at GIS Day.

Student Post: Adam Kral on AR and VR Development

Guest post by Adam Kral (’20) on his summer work for Academic Technology.

So far this summer I have been working on two projects: an augmented reality app to display images related to Buddhism and a skydiving simulator in virtual reality. Both projects have been built using the Unity game engine. The Buddhism app started with a two-dimensional slider that manipulated an image above it, as shown below.

screenshot of Buddhism app in development

I then converted this app to augmented reality using ARToolKit 5. When the camera detects the background image, the images appear in three-dimensional space. The slider has been replaced with a joystick for manipulating the images. The finished product is shown below.

screenshot of Buddhism time app at end of phase 1

In addition to this AR app, I have been building a virtual reality skydiving simulator for the HTC Vive. The player controls their drag, x-y movement, and rotation via the movement of the controllers. This movement is tracked by determining the controllers’ positions relative to the headset (a rough sketch of that mapping is below). There is still work to be done, such as adding colliders and textures to the buildings. Some screenshots from inside the headset are below.
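The simulator itself is built in Unity (so the real code is C#), but the core idea of reading each controller’s position relative to the headset and turning those offsets into drag, lateral movement, and rotation can be sketched roughly as follows. This is a hypothetical illustration; the specific formulas are invented for the example, not taken from the project.

```python
# Hypothetical sketch of skydiving controls derived from controller positions
# relative to the headset. Illustrative only; the real project uses Unity/C#.
import numpy as np


def skydive_controls(head_pos, left_pos, right_pos):
    """Map 3D positions (x, y, z) of the headset and two controllers to
    (drag, lateral_xy, yaw) control values."""
    head = np.asarray(head_pos, dtype=float)
    left = np.asarray(left_pos, dtype=float) - head    # controller offsets
    right = np.asarray(right_pos, dtype=float) - head  # from the headset

    # Arms spread away from the body (horizontal distance) increase drag.
    drag = np.linalg.norm(left[[0, 2]]) + np.linalg.norm(right[[0, 2]])

    # Both hands shifted the same way lean the diver in the horizontal plane.
    lateral_xy = (left[[0, 2]] + right[[0, 2]]) / 2.0

    # One hand forward and the other back yaws the diver.
    yaw = left[2] - right[2]
    return drag, lateral_xy, yaw


# Example: headset at eye height, arms spread slightly apart and staggered.
print(skydive_controls((0.0, 1.7, 0.0), (-0.5, 1.5, 0.2), (0.5, 1.5, -0.1)))
```

In Unity, the same idea would read the tracked controller and camera transforms each frame instead of raw coordinates.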

Hello! What’s New in AT Social Media

Welcome! If you’ve been following this blog, you may notice things look a little different. We’ve updated to a brighter, friendlier theme with some features that help this site serve as a better resource. If you are just finding this blog for the first time: great timing! We’re glad to have you join us.

At the top of the main page, you’ll see a set of six “featured” posts. These will rotate pretty regularly, but always contain useful or pertinent information beyond our normal posts. Below the featured posts are our standard posts, filled with articles on useful technology, pedagogical musings, interesting projects, features on the team, and more.

Screenshot of blog

Each post, in addition to its content, will have a set of “tags” off to the left. If you’re interested in a particular tagged topic, you can click on a tag and see all posts related to that topic. There is also a search box near the top to help you find posts.

 

Screenshot of blog post

 

This blog is just one component of our greater social media presence. We’re active on our Facebook page and our Twitter account, and we recently added an Instagram account. Connect with us on one or all, and let us know what you want to see or learn about.

So it begins… 3D Printing in the IdeaLab

Carleton’s newest 3D printer: the Ultimaker 2+

As the students return to Carleton and campus life resumes in earnest, you may notice some changes in the IdeaLab and the AT offices in the Weitz Center for Creativity (not to mention the massive construction project just outside…). The IdeaLab has been undergoing renovations and redesigns to better serve the whole community. We’ll be writing another post about that process, but here I’ll be focusing on one of our newest tools: our 3D printer. This post will focus primarily on our initial prints rather than how-tos; those will be coming in the future.

After a lot of consideration, talking with experts, and looking at samples, we decided to go with an Ultimaker 2+, one of the most highly regarded 3D printers on the market. It’s a very dependable, well-supported machine, and it looks fantastic too.

Our new 3D printer! #ifyoubuildittheywillcome #wehavethetechnology #ultimaker2 #filementaryMyDearWatson

A photo posted by CarletonAcademicTechnology (@carletonacademictechnology)

As part of the initial set-up, we needed to calibrate and configure the machine. This took a few hours, as the build plate (the surface the 3D printer prints onto) needs to be perfectly level. This level of precision goes beyond a standard bubble level; we were dealing with differences of less than the thickness of a sheet of printer paper. With our filament loaded and the plate leveled, we printed our first test print: a little robot designed by Ultimaker.

The first 3D print from our new 3D printer (an Ultimaker2+) #3dprinting #ultimaker2 #idealab

A photo posted by CarletonAcademicTechnology (@carletonacademictechnology)

With our 3D printer working, we decided to test another file directly from Ultimaker, a little heart keychain.

We ❤ our new 3D printer! #3dprinting #heartintherightplace #ultimaker2

A photo posted by CarletonAcademicTechnology (@carletonacademictechnology)

After that success, Andrew had me find a large file on Thingiverse to print. Thingiverse is an online community where people upload 3D files for others to download, modify, and print. It can be a rabbit hole for time, as there is so much incredible content available to browse. I ended up choosing an owl pen holder; you can see it on Thingiverse by clicking here. It was addictive to watch the 3D printer build it up layer by layer, so we set up our timelapse camera to shoot the print. Check it out below! (For reference, this five-inch-tall owl took about 27 hours to print, as each layer is less than the width of a human hair.)

Here’s what the final product looks like. It’s surprisingly sturdy and solid.

"Who" loves 3D printing…? We do! #giveahoot #3dprinting #ultimaker2 #hoursoffun #punstoppable

A photo posted by CarletonAcademicTechnology (@carletonacademictechnology)

Stay tuned for more posts about the IdeaLab, our 3D printer, and more!