Unity framework · ~15 mins

Touch input basics in Unity - Deep Dive

Overview - Touch input basics
What is it?
Touch input basics in Unity refer to how a game or app detects and responds to finger touches on a touchscreen device. It involves recognizing when and where the screen is touched, how many fingers are used, and how they move. This allows developers to create interactive experiences that feel natural on phones and tablets. Unity provides built-in tools to handle these touch events easily.
Why it matters
Mobile games and apps are controlled primarily by fingers on the screen; without touch input they would be awkward to control and far less engaging. Touch input solves the problem of translating finger movements into actions inside the app, making the experience intuitive and responsive. The alternative, indirect controls such as on-screen buttons or external keyboards, feels less natural and reduces usability and fun.
Where it fits
Before learning touch input basics, you should understand Unity's basic scripting and game object concepts. After mastering touch input, you can explore advanced gestures, multi-touch interactions, and integrating touch with UI elements for richer user experiences.
Mental Model
Core Idea
Touch input in Unity is about detecting finger touches on the screen and turning them into meaningful actions inside your game or app.
Think of it like...
Imagine the touchscreen as a magic window that senses where your fingers press, slide, or tap, like pressing buttons or moving pieces on a board game.
┌─────────────────────────────┐
│      Touch Input Flow       │
├─────────────┬───────────────┤
│ Touch Start │ Finger touches│
│             │ screen        │
├─────────────┼───────────────┤
│ Touch Move  │ Finger moves  │
│             │ on screen     │
├─────────────┼───────────────┤
│ Touch End   │ Finger lifts  │
│             │ off screen    │
└─────────────┴───────────────┘
Build-Up - 7 Steps
1
Foundation: Understanding Touch Input Basics
🤔
Concept: Learn what touch input means and how Unity detects finger touches on a screen.
Touch input means the screen senses when and where your finger touches it. Unity uses the Input.touches array to keep track of all fingers currently touching the screen. Each touch has properties like position, phase (start, move, end), and fingerId to identify it.
Result
You can detect when a finger touches the screen and get its position.
Understanding that touches are tracked as objects with phases helps you know how to respond to different touch moments.
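A minimal sketch of this idea, using the Input.touches API described above (the class name and log text are illustrative): attached to any GameObject, it logs when and where each new finger lands.

```csharp
using UnityEngine;

// Sketch: log each new touch the moment it begins.
public class TouchLogger : MonoBehaviour
{
    void Update()
    {
        // Input.touches holds every finger currently on the screen this frame.
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began)
            {
                Debug.Log($"Finger {touch.fingerId} touched at {touch.position}");
            }
        }
    }
}
```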
2
Foundation: Using Input.touches to Detect Fingers
🤔
Concept: Learn how to access and read the touches array in Unity scripts.
In Unity, Input.touches is an array containing all current touches. You can loop through it to check each finger's position and phase. For example, Input.touches[0] gives the first finger touching the screen. The TouchPhase enum tells if the finger just started touching, moved, or ended.
Result
You can write code that reacts when a finger touches, moves, or lifts off the screen.
Knowing how to read the touches array is the foundation for all touch-based interactions.
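The array access described above can be sketched like this (Input.GetTouch(0) is equivalent to Input.touches[0] and avoids copying the whole array):

```csharp
using UnityEngine;

// Sketch: read the first finger's position and phase each frame.
public class TouchReader : MonoBehaviour
{
    void Update()
    {
        // Input.touchCount is how many fingers are down this frame.
        if (Input.touchCount > 0)
        {
            Touch first = Input.GetTouch(0);
            Debug.Log($"First finger at {first.position}, phase: {first.phase}");
        }
    }
}
```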
3
Intermediate: Handling Touch Phases Correctly
🤔Before reading on: do you think a touch's position changes during all phases or only during movement? Commit to your answer.
Concept: Learn the meaning of different touch phases and how to use them to respond properly.
TouchPhase has several states: Began (finger just touched), Moved (finger moved), Stationary (finger held still), Ended (finger lifted), and Canceled (system interrupted). Handling these phases lets you know when to start an action, update it, or finish it.
Result
Your app can respond smoothly to finger actions, like starting a drag on Began and updating position on Moved.
Understanding phases prevents bugs like reacting to a touch too early or missing finger lifts.
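A phase-driven drag can be sketched as a switch over TouchPhase (the 0.01f movement scale is an assumed tuning value):

```csharp
using UnityEngine;

// Sketch: start a drag on Began, update on Moved, stop on Ended/Canceled.
public class DragHandler : MonoBehaviour
{
    private bool dragging;

    void Update()
    {
        if (Input.touchCount == 0) return;

        Touch touch = Input.GetTouch(0);
        switch (touch.phase)
        {
            case TouchPhase.Began:
                dragging = true;          // the action starts here, once
                break;
            case TouchPhase.Moved:
                if (dragging)
                {
                    // deltaPosition is the movement since the last frame
                    transform.Translate(touch.deltaPosition * 0.01f);
                }
                break;
            case TouchPhase.Ended:
            case TouchPhase.Canceled:     // treat system cancellation like a lift
                dragging = false;
                break;
        }
    }
}
```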
4
Intermediate: Tracking Multiple Fingers with fingerId
🤔Before reading on: do you think fingerId changes during a touch or stays the same? Commit to your answer.
Concept: Learn how to identify and track each finger separately using fingerId.
Each touch has a fingerId that stays the same while the finger is on the screen. This lets you track multiple fingers independently, like for pinch zoom or multi-finger gestures. You store fingerId and match it in later frames to continue tracking the same finger.
Result
You can handle multi-touch gestures by tracking each finger's movement separately.
Knowing fingerId is key to managing complex gestures involving several fingers.
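The store-and-match pattern described above might look like this sketch, which claims one finger by its id and ignores the rest:

```csharp
using UnityEngine;

// Sketch: track one specific finger across frames via its fingerId.
public class FingerTracker : MonoBehaviour
{
    private int trackedId = -1; // -1 means "not tracking any finger"

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began && trackedId == -1)
            {
                trackedId = touch.fingerId;      // claim the first new finger
            }
            else if (touch.fingerId == trackedId)
            {
                if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
                {
                    trackedId = -1;              // the finger left; release it
                }
                else
                {
                    Debug.Log($"Tracked finger now at {touch.position}");
                }
            }
        }
    }
}
```

For gestures like pinch zoom, the same pattern extends to two stored ids whose positions you compare each frame.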
5
Intermediate: Converting Touch Positions to World Space
🤔
Concept: Learn how to translate screen touch positions into game world coordinates.
Touch positions are given in screen pixels. To interact with game objects, you convert these positions using Camera.ScreenToWorldPoint. This lets you know where in the game world the finger touched, enabling object selection or movement.
Result
Your game can respond to touches by moving or selecting objects in the scene.
Understanding coordinate conversion bridges the gap between screen input and game logic.
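The conversion can be sketched as follows. Note that ScreenToWorldPoint needs a z component: the distance from the camera at which you want the world point (cameraDistance is an assumed value, e.g. 10 units for a typical 2D setup).

```csharp
using UnityEngine;

// Sketch: move this object to wherever the first finger touches.
public class TouchToWorld : MonoBehaviour
{
    [SerializeField] private float cameraDistance = 10f; // assumed tuning value

    void Update()
    {
        if (Input.touchCount == 0) return;

        Vector2 screenPos = Input.GetTouch(0).position;
        // z = distance from the camera; without it you get the camera's own plane
        Vector3 worldPos = Camera.main.ScreenToWorldPoint(
            new Vector3(screenPos.x, screenPos.y, cameraDistance));
        transform.position = worldPos;
    }
}
```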
6
Advanced: Optimizing Touch Input for Performance
🤔Before reading on: do you think checking all touches every frame is always efficient? Commit to your answer.
Concept: Learn how to handle touch input efficiently to keep your game smooth.
Checking every touch every frame can be costly, especially with many fingers or complex logic. Use early exits when no touches exist, cache fingerId tracking, and avoid heavy calculations inside touch loops. Also, handle touch input only when needed, like during gameplay, to save resources.
Result
Your app runs smoothly without lag caused by touch input processing.
Knowing how to optimize touch handling prevents performance drops on mobile devices.
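The optimizations above can be sketched like this (HandleTap is a hypothetical game-specific callback; Input.GetTouch is used in the loop because Input.touches returns a fresh array each call, which allocates garbage):

```csharp
using UnityEngine;

// Sketch: early-exit when idle, keep per-touch work cheap.
public class EfficientTouchHandler : MonoBehaviour
{
    void Update()
    {
        // Early exit: skip all touch logic when no finger is down.
        if (Input.touchCount == 0) return;

        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i); // avoids the Input.touches allocation
            // Keep the loop body light; defer heavy work (raycasts,
            // pathfinding, etc.) to the moments that need it, e.g. Began.
            if (touch.phase == TouchPhase.Began)
            {
                HandleTap(touch.position);
            }
        }
    }

    private void HandleTap(Vector2 screenPos) { /* game-specific response */ }
}
```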
7
Expert: Handling Edge Cases and Platform Differences
🤔Before reading on: do you think touch input behaves identically on all devices? Commit to your answer.
Concept: Learn about tricky cases like touch cancellations, multi-touch quirks, and platform-specific behaviors.
Some devices may cancel touches unexpectedly (TouchPhase.Canceled), or report touches differently (e.g., stylus vs finger). Also, gestures like double-tap or long press require timing logic beyond basic phases. Handling these requires careful coding and testing on multiple devices to ensure consistent behavior.
Result
Your app handles touch input reliably across devices and unusual user actions.
Understanding platform quirks and edge cases is essential for professional-quality touch input.
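As one concrete example of the timing logic mentioned above, a double-tap detector can be sketched like this (the 0.3-second window is an assumed threshold you would tune per game):

```csharp
using UnityEngine;

// Sketch: timing-based double-tap detection that also cleans up
// when the OS cancels a touch (phone call, system gesture, etc.).
public class DoubleTapDetector : MonoBehaviour
{
    private float lastTapTime = -1f;
    private const float DoubleTapWindow = 0.3f; // assumed threshold

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began)
            {
                if (lastTapTime > 0f && Time.time - lastTapTime < DoubleTapWindow)
                {
                    Debug.Log("Double tap");
                    lastTapTime = -1f;
                }
                else
                {
                    lastTapTime = Time.time;
                }
            }
            else if (touch.phase == TouchPhase.Canceled)
            {
                // Treat a cancellation like a lift and reset pending state.
                lastTapTime = -1f;
            }
        }
    }
}
```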
Under the Hood
Unity's touch input system reads data from the device's touchscreen hardware each frame. It tracks each finger's position and state, updating the Input.touches array accordingly. Internally, the OS sends raw touch events, which Unity translates into Touch objects with properties like position and phase. This data is then accessible in scripts every frame for real-time interaction.
Why designed this way?
This design abstracts complex hardware differences into a simple, consistent API. It allows developers to write code once and have it work on many devices. The phase system models the natural lifecycle of a touch, making it intuitive to respond to user actions. Alternatives like event-driven models were less flexible for games needing frame-by-frame updates.
┌───────────────┐
│ Touchscreen   │
│ Hardware      │
└──────┬────────┘
       │ Raw touch events
       ▼
┌───────────────┐
│ Operating     │
│ System Driver │
└──────┬────────┘
       │ Processed touch data
       ▼
┌───────────────┐
│ Unity Engine  │
│ Input System  │
│(Input.touches)│
└──────┬────────┘
       │ Touch objects each frame
       ▼
┌───────────────┐
│ Developer     │
│ Scripts       │
└───────────────┘
Myth Busters - 4 Common Misconceptions
Quick: Do you think Input.touches always contains all fingers that ever touched the screen? Commit to yes or no.
Common Belief: Input.touches contains every finger that has touched the screen since the app started.
Reality: Input.touches only contains fingers currently touching the screen during the current frame.
Why it matters: Assuming old touches remain leads to bugs where code reacts to fingers no longer present, causing unexpected behavior.
Quick: Do you think fingerId changes if you lift and touch again with the same finger? Commit to yes or no.
Common Belief: fingerId is unique per finger and stays the same even after lifting and touching again.
Reality: fingerId is only stable for a single touch session; when the finger touches the screen anew it may receive a different id.
Why it matters: Misusing fingerId causes confusion in tracking fingers across multiple touches, breaking multi-touch gestures.
Quick: Do you think TouchPhase.Moved means the finger always changed position? Commit to yes or no.
Common Belief: TouchPhase.Moved means the finger moved a meaningful distance every frame.
Reality: TouchPhase.Moved means the reported position changed since the last frame; hardware sensitivity can register tiny, unintentional movements as Moved.
Why it matters: Assuming Moved always means significant movement can cause jittery or overly sensitive responses.
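A common fix for the jitter described above is a dead zone: ignore Moved deltas below a pixel threshold. The 5-pixel value here is an assumed constant; in practice you would scale it by Screen.dpi so it feels the same across devices.

```csharp
using UnityEngine;

// Sketch: only treat Moved as a real drag past a dead-zone threshold.
public class DeadZoneExample : MonoBehaviour
{
    private const float DeadZonePixels = 5f; // assumed threshold

    void Update()
    {
        if (Input.touchCount == 0) return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Moved &&
            touch.deltaPosition.magnitude > DeadZonePixels)
        {
            // Only now respond, so sensor jitter doesn't shake the object.
            transform.Translate(touch.deltaPosition * 0.01f);
        }
    }
}
```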
Quick: Do you think touch input works exactly the same on all platforms Unity supports? Commit to yes or no.
Common Belief: Touch input behaves identically on iOS, Android, and other platforms.
Reality: Different platforms have subtle differences in touch reporting, timing, and gesture recognition.
Why it matters: Ignoring platform differences can cause inconsistent user experiences and bugs on some devices.
Expert Zone
1
Touch input phases can overlap in complex ways, requiring careful state management to avoid missed or duplicated events.
2
Some devices support stylus input which may appear as touch but have different properties; handling these separately can improve precision.
3
Touch input can be combined with accelerometer or gyroscope data to create richer interaction models, like tilt-based controls.
When NOT to use
Touch input basics are not suitable for desktop or console games without touchscreens; use mouse or controller input systems there instead. For complex gestures, consider gesture-recognition packages or third-party libraries that provide higher-level abstractions rather than hand-rolling phase logic.
Production Patterns
In production, touch input is often wrapped in input manager scripts that abstract finger tracking and gesture detection. Developers debounce touch events to avoid accidental taps, combine multi-touch for pinch and rotate gestures, and integrate touch with UI event systems for seamless user interaction.
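The wrapper pattern described above can be sketched as a single script that polls raw touches and exposes clean C# events; everything here (class and event names) is illustrative.

```csharp
using System;
using UnityEngine;

// Sketch: one manager polls touches, the rest of the game subscribes to events.
public class TouchInputManager : MonoBehaviour
{
    public static event Action<Vector2> OnTap;   // fires with screen position
    public static event Action<Vector2> OnDrag;  // fires with per-frame delta

    void Update()
    {
        if (Input.touchCount == 0) return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
            OnTap?.Invoke(touch.position);
        else if (touch.phase == TouchPhase.Moved)
            OnDrag?.Invoke(touch.deltaPosition);
    }
}
```

Gameplay scripts then subscribe with TouchInputManager.OnTap += HandleTap and never read Input directly, which keeps gesture logic in one place.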
Connections
Event-driven programming
Touch input phases resemble event states in event-driven systems.
Understanding touch phases as events helps grasp how user actions trigger changes in app state, similar to button clicks or network events.
Human-computer interaction (HCI)
Touch input is a core part of HCI, focusing on how humans interact with devices.
Knowing HCI principles helps design touch interactions that feel natural and reduce user frustration.
Signal processing
Touch input data can be noisy and requires filtering, similar to signal processing techniques.
Applying smoothing or debounce logic to touch data improves responsiveness and user experience.
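The smoothing idea above can be sketched as a simple low-pass filter: blend a fraction of each new reading into a running value with Vector2.Lerp. The smoothing factor is an assumed tuning value between 0 (frozen) and 1 (no smoothing).

```csharp
using UnityEngine;

// Sketch: low-pass filter the raw touch position to suppress jitter.
public class SmoothedTouch : MonoBehaviour
{
    [SerializeField] private float smoothing = 0.2f; // assumed tuning value
    private Vector2 filtered;

    void Update()
    {
        if (Input.touchCount == 0) return;

        Vector2 raw = Input.GetTouch(0).position;
        // Each frame, move 20% of the way toward the raw reading.
        filtered = Vector2.Lerp(filtered, raw, smoothing);
    }
}
```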
Common Pitfalls
#1: Ignoring touch phases and reacting to all touches the same way.
Wrong approach:
foreach (Touch touch in Input.touches)
{
    Vector2 pos = touch.position;
    MoveObjectTo(pos); // moves the object regardless of phase
}
Correct approach:
foreach (Touch touch in Input.touches)
{
    if (touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Moved)
    {
        Vector2 pos = touch.position;
        MoveObjectTo(pos);
    }
}
Root cause: Not distinguishing touch phases causes actions to trigger at the wrong times, leading to jittery or unintended behavior.
#2: Assuming fingerId stays the same across multiple touches by the same finger.
Wrong approach:
int trackedFinger = 0; // uses the same id forever without updating
foreach (Touch touch in Input.touches)
{
    if (touch.fingerId == trackedFinger)
    {
        // Process touch
    }
}
Correct approach:
int trackedFinger = -1; // assign when a new touch begins
foreach (Touch touch in Input.touches)
{
    if (touch.phase == TouchPhase.Began && trackedFinger == -1)
    {
        trackedFinger = touch.fingerId;
    }
    if (touch.fingerId == trackedFinger)
    {
        // Process touch
        if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
        {
            trackedFinger = -1; // only reset when the tracked finger lifts
        }
    }
}
Root cause: Misunderstanding the fingerId lifecycle causes tracking errors and lost touch references.
#3: Using touch.position directly without converting to world coordinates.
Wrong approach:
Vector2 touchPos = Input.touches[0].position;
// Uses raw screen pixels as world coordinates
target.transform.position = new Vector3(touchPos.x, touchPos.y, 0);
Correct approach:
Vector2 touchPos = Input.touches[0].position;
Vector3 worldPos = Camera.main.ScreenToWorldPoint(
    new Vector3(touchPos.x, touchPos.y, cameraDistance));
target.transform.position = worldPos;
Root cause: Confusing screen space with world space causes objects to move incorrectly or off-screen.
Key Takeaways
Touch input in Unity tracks finger touches as objects with phases like Began, Moved, and Ended to represent the lifecycle of a touch.
Each finger is identified by a fingerId that is unique per touch session, allowing multi-touch gestures to be handled properly.
Touch positions are given in screen pixels and must be converted to world coordinates to interact with game objects accurately.
Handling touch input efficiently and accounting for platform differences ensures smooth and consistent user experiences.
Understanding touch phases and finger tracking prevents common bugs and enables building rich, natural touch interactions.