Curious Goat

Investigative, Exploratory, Experimental
Experimental projects on the path of self-development that runs alongside my professional work.
A kinaesthetic learning style of exploring, learning and testing the limits within a frame, together with my colleagues. And my spirit animal is the goat.
Slinger (2020)
Aim
An opportunity for designers and developers to familiarize themselves with Magic Leap, a mixed reality device, while developing an experience that encourages colleagues to delve into the world of MR.
Who?
A game developed to educate and entertain colleagues at TAKELEAP about Magic Leap, while helping establish a system of communication for this new platform between the developers and designers.
Contribution
Me: Modelling, Concept development, Game narrative.
Yadav Raj: Modelling, concept development.
Kalyanakrishnan: Rigging and animation.
Technology
Magic Leap 1 model with a single controller and hand tracking.
Blender for 3D models.
Developed using Unreal.
Intent
We aimed to explore with a focus on asset formats, properties, UV mapping, vertex and poly counts, effective hand recognition, animation, and scaling. However, it was crucial for the results to also address the resistance among individuals in our office environment towards using mixed reality (MR) devices.
Observation
Looking for an opportunity for intervention, we noticed that the gathering area centred around Table Tennis (TT) was a notable socialising spot that attracted both experienced and novice players. In order to foster greater participation and facilitate richer interactions, we proposed the concept of a multiplayer mixed reality (MR) game.
Proposal
A game that encourages movement, hand tracking, humour, interaction and people's exploratory side as they traverse the levels and barriers: a stationary field of play that could be spawned over the TT table, with the players moving around it to experience the game.
The Storyline
The experience starts with a brief background story and the context of the gameplay. The journey takes the user from the Earth to a mysterious island among the clouds named Laputa (inspired by Gulliver’s Travels by Jonathan Swift). The player(s) take on the role of guardians of the island, saving ‘the town centre’ from the villains that emerge from the base of the ‘Tri-mountains’.
This narration serves as the introduction to the game; from here, the player starts playing by using the slingshots to fire garlic at the red witches and defend the city.
The plot
Micro interactions
Apart from the standard animated objects like the windmill and clouds, Laputa has interactive elements of surprise that elevate the immersion of the experience (a minimal sketch of this hit-response logic follows the list):
  • When the garlic hits a cloud, it starts raining.
  • If it hits a witch, she blows up into fire particles.
  • If it hits the cattle, they make sounds and run to another spot.
  • If it hits no particular object, it grows into a huge plant that creates more obstruction.
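To make these branches concrete, here is a minimal, engine-agnostic C++ sketch of the hit-response dispatch. The actual project was built in Unreal, so the names here (HitTarget, onGarlicHit) are assumptions for illustration only.

```cpp
#include <iostream>

// Hypothetical hit categories; the real project would resolve these from the
// collided actor inside the engine.
enum class HitTarget { Cloud, Witch, Cattle, Ground };

// Each branch mirrors one of the micro interactions listed above.
void onGarlicHit(HitTarget target) {
    switch (target) {
        case HitTarget::Cloud:
            std::cout << "Start the rain effect on that cloud\n";
            break;
        case HitTarget::Witch:
            std::cout << "Despawn the witch and spawn fire particles\n";
            break;
        case HitTarget::Cattle:
            std::cout << "Play cattle sounds and move them to another spot\n";
            break;
        case HitTarget::Ground:
        default:
            std::cout << "Grow a huge plant that creates more obstruction\n";
            break;
    }
}

int main() {
    onGarlicHit(HitTarget::Witch);   // e.g. a witch is hit
    onGarlicHit(HitTarget::Ground);  // e.g. the garlic lands on open ground
}
```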



Testing
The initial plan was to use hand recognition instead of the controller to propel the garlic with the sling, but while testing we realized that hand recognition works reliably only within the device's cone of vision, which restricts movement. To resolve this, the catapult was fixed to the ground: the garlic is spawned by pressing the controller trigger and propelled upon release.
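As a rough illustration of this "spawn on trigger press, launch on release" mechanic, here is a plain C++ sketch. The real implementation lives in Unreal; the Vec3 and Sling types and the launch constant are assumptions made for this example.

```cpp
#include <cmath>
#include <iostream>
#include <optional>

struct Vec3 { float x, y, z; };

struct Sling {
    Vec3 anchor;                      // catapult fixed to the ground
    std::optional<Vec3> garlic;       // current garlic position, if one is spawned

    void onTriggerPressed() {
        garlic = anchor;              // spawn the garlic at the sling
    }

    void onTriggerHeld(const Vec3& controllerPos) {
        if (garlic) garlic = controllerPos;   // pull the sling back with the controller
    }

    // On release, launch along the stretch direction, scaled by the pull distance.
    Vec3 onTriggerReleased() {
        if (!garlic) return {0.0f, 0.0f, 0.0f};
        Vec3 pull{anchor.x - garlic->x, anchor.y - garlic->y, anchor.z - garlic->z};
        float dist = std::sqrt(pull.x * pull.x + pull.y * pull.y + pull.z * pull.z);
        const float strength = 5.0f;  // assumed tuning constant
        Vec3 velocity{pull.x * strength, pull.y * strength, pull.z * strength};
        garlic.reset();
        std::cout << "Launched with speed " << dist * strength << "\n";
        return velocity;
    }
};

int main() {
    Sling sling{{0.0f, 0.0f, 0.0f}, std::nullopt};
    sling.onTriggerPressed();
    sling.onTriggerHeld({0.0f, -0.3f, -0.4f});
    sling.onTriggerReleased();
}
```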
Epilogue
This project gave me a great opportunity to explore and develop my 3D modelling skills in Blender. Exploring the experimental phases while learning about something as new as designing for Magic Leap excited me the most. I wish to test and iterate on the project as conceived and take it into development.
Virtual Keyboard (2019)
Aim
The need for this study starts with the disruption of immersion caused by text input in Virtual Environments (VEs). The aim was to arrive at a suitable mechanism for inputting text in VEs.
Intent
The need for this experiment arose while looking at options for inputting user credentials at the beginning of a VR training simulation.

The priorities were to develop a model of text input for VEs that is versatile, intuitive and reduces errors.
Contribution
Me: Research, testing, 3D modelling, Concept development
Yadav Raj: Development, concept
Raja: 3D modelling
Versatile - independent of hardware restrictions.
Intuitive - the user shouldn’t take long to get acquainted with the input method, and it should feel natural at the same time.
Reduce errors - the frequency of errors is also taken into consideration, as errors ultimately increase the time taken and reduce the user’s interest in completing the task at hand.
Current techniques and their drawbacks
Popular
Laser pointer: This is the most common technique, where a model of the QWERTY keyboard is displayed and a laser beam is projected from the controller onto the target key. The further away the target, the shakier the beam gets.
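A small, engine-agnostic C++ sketch of how such a laser pointer can be resolved to a key: the controller ray is intersected with the keyboard plane and the hit point is mapped to a key cell. The grid layout, dimensions and names are assumptions; the example also hints at why a small angular jitter becomes a large error at a distant keyboard.

```cpp
#include <cmath>
#include <iostream>
#include <optional>
#include <utility>

struct Vec3 { float x, y, z; };

// Keyboard modelled as a flat grid at z = planeZ, with keys of size keyW x keyH.
struct KeyboardPlane {
    float planeZ = -1.0f;
    float keyW = 0.05f;
    float keyH = 0.05f;

    // Intersect the controller ray with the keyboard plane and map the hit
    // point to a (row, column) key cell; returns nothing on a miss.
    std::optional<std::pair<int, int>> hitKey(const Vec3& origin, const Vec3& dir) const {
        if (std::fabs(dir.z) < 1e-6f) return std::nullopt;  // ray parallel to the plane
        float t = (planeZ - origin.z) / dir.z;
        if (t < 0.0f) return std::nullopt;                   // keyboard is behind the controller
        float x = origin.x + t * dir.x;
        float y = origin.y + t * dir.y;
        int col = static_cast<int>(std::floor(x / keyW));
        int row = static_cast<int>(std::floor(y / keyH));
        return std::make_pair(row, col);
    }
};

int main() {
    KeyboardPlane kb;
    // A small angular jitter in the controller direction turns into a large
    // positional error at the keyboard, which is why distant targets feel shaky.
    auto key = kb.hitKey({0.0f, 0.0f, 0.0f}, {0.02f, 0.01f, -1.0f});
    if (key) std::cout << "Hit key at row " << key->first << ", column " << key->second << "\n";
}
```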
Audio: Audio input is the most intuitive of all the techniques: holding a button, one can dictate the text. But because of varied accents and pronunciations, and the lack of inclusivity in the data the software is trained on, the error rate of voice recognition is high on any platform, let alone in VEs.
Swipe text: As an Android user, I have always used the swipe gesture to input text. But in a VE this is just an add-on feature rather than the norm. Additionally, in the context of this project, swiping would be highly unsuitable for entering password characters.
Queer
Pinch Keyboard: This technique uses the QWERTY layout and Pinch Gloves™ with conductive cloth on each fingertip that senses when two or more fingers are touching. Each row of the keyboard is divided into two halves, one per hand, and each letter in a row is assigned to a different finger, e.g. 'a' - pinky; 's' - ring; 'd' - middle, and so on.
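The chord-to-letter idea can be captured as a simple lookup from (hand, row, finger) to a character. The partial mapping below is illustrative only, filling in the example given above; it is not taken from the original Pinch Keyboard system.

```cpp
#include <iostream>
#include <map>
#include <tuple>

enum class Hand { Left, Right };
enum class Finger { Index, Middle, Ring, Pinky };

// (hand, row, finger) -> letter; only part of the left-hand home row (row 1)
// is filled in here, matching the example above.
const std::map<std::tuple<Hand, int, Finger>, char> kPinchMap = {
    {{Hand::Left, 1, Finger::Pinky},  'a'},
    {{Hand::Left, 1, Finger::Ring},   's'},
    {{Hand::Left, 1, Finger::Middle}, 'd'},
    {{Hand::Left, 1, Finger::Index},  'f'},
};

int main() {
    // A pinch of the left thumb against the ring finger on the home row.
    auto it = kPinchMap.find(std::make_tuple(Hand::Left, 1, Finger::Ring));
    if (it != kPinchMap.end()) std::cout << "Pinch maps to '" << it->second << "'\n";
}
```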
Pen & Tablet: In this metaphor for an actual pen and tablet, the user sees a virtual version of the tracked physical stylus and the tablet. For every letter entry, the user points and clicks on the letter visible to them in the VE. This method proved to be much more efficient than the Twiddler2 described next.
Twiddler2: This is a hand-held chorded keyboard designed for one-handed use. Letters are input by pressing down one key or a combination of keys (much like guitar chords), and the device is visible to the user within the virtual world too.
SWIFTER: A speech-based multimodal interaction approach where, after each sentence, the user can edit any mistakes made during the speech-to-text conversion. The aim was also to reduce the learning curve and make the experience intuitive.
This chart represents average speeds over many trials to illustrate the learning curve of each technique; here the chorded keyboard is the Twiddler2.
Comfort ratings among the participants for the same individual techniques.
Google Daydream
Something about this technique caught my interest, but unfortunately, since it was taken down, I wasn’t able to try it out personally. This inspired me to make my own version of it. It is so simple and intuitive that it almost looks like playing drums. After a few iterations, the following was modelled and developed.
My tutor once said
“The best first thing you could do is to copy what exists already”
What was I interested in testing?
Creating a virtual keyboard mimicking Google’s Daydream keyboard. The reasons were:
  • It would feel like playing drums.
  • The position of the hands can be kept at a comfortable level.
  • The physics of a real-world keyboard could be used for action feedback (a rough key-press sketch follows this list).
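Here is a rough C++ sketch of the drum-style key press under these assumptions: a key registers once when the tip of the controller "drumstick" dips below the key top, is pushed down for feedback, and springs back when the tip rises again. The Key type, the heights and the travel constants are all made up for illustration.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Key {
    char label;
    float restY;        // resting height of the key top
    float pressDepth;   // how far the key travels when pressed
    float currentY;     // current height, used to render the push-back
    bool pressed;
};

// Called every frame with the height of the drumstick tip above this key;
// returns true exactly once per press, when the character should be emitted.
bool updateKey(Key& key, float tipY) {
    if (!key.pressed && tipY < key.restY) {
        key.pressed = true;
        key.currentY = key.restY - key.pressDepth;  // push the key down for feedback
        return true;
    }
    if (key.pressed && tipY > key.restY + key.pressDepth) {
        key.pressed = false;                         // the key springs back up
        key.currentY = key.restY;
    }
    return false;
}

int main() {
    Key g{'g', 1.0f, 0.01f, 1.0f, false};
    std::vector<float> tipHeights = {1.05f, 1.01f, 0.99f, 0.98f, 1.02f};  // one downward tap
    std::string typed;
    for (float h : tipHeights) {
        if (updateKey(g, h)) typed += g.label;
    }
    std::cout << "Typed: " << typed << "\n";  // prints "Typed: g"
}
```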
Time to test
The subjects for this test were chosen carefully: the participants covered a wide range of physical-keyboard typing speeds, and their exposure to virtual reality also varied.
Observation
Contrary to the assumption, half of the participants used only one hand to type. More interestingly, that half were the ones with the higher typing speeds on both physical and virtual keyboards.

It would have been better to have visual and sensory feedback (such as key push-back or controller vibrations) as the keys were typed.

The layout could have been more spaced out, as the tight spacing caused constant collision errors.
The numerical data derived from these trials was then compared against the data collected on each participant’s typing performance on a physical keyboard.
Epilogue
Improvements would include visual feedback, using the colour of the key to represent its different states, and/or a push-back on the key that mimics the feedback of a physical keyboard.
Future plans are to build and test a version of the pinch keyboard using hand gesture recognition and to study its pros and cons.