Unity Input Event Handlers - Or Adding Juice the Easy Way

Need to add a little juice to your UI? Maybe your player needs to interact with scene objects? Or maybe you want to create a dynamic or customizable UI? This can feel hard or just plain confusing to add to your project. Not to mention a lot of the solutions out there are more complex than they need to be!

Using Unity’s Event Handlers can simplify and clean up your code while offering better functionality than other solutions. I’ve seen a lot of solutions out there to move scene objects, create inventory UI, or make draggable UI. Many or maybe most of those solutions are overly complicated because they don’t make full use of Unity’s event handlers (or the Pointer Event Data class).

Did I mention these handlers work with both the “new” and the “old” input systems? Learn them once and use them with either system. So let’s take a look at what they can do!

If you just want to see the example code, you can find it here on GitHub.

Input Event Handlers

Event handlers are added by including using UnityEngine.EventSystems and then implementing one or more of the interfaces. For example, IPointerEnterHandler requires an OnPointerEnter function to be added. No surprise - this function will then get called when the pointer enters (the rect transform of) the UI element.
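
In its most minimal form, that looks something like this (a bare-bones sketch with a hypothetical class name, just to show the pattern):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Minimal example: logs when the pointer enters this UI element
public class HoverLogger : MonoBehaviour, IPointerEnterHandler
{
    public void OnPointerEnter(PointerEventData eventData)
    {
        Debug.Log($"Pointer entered {gameObject.name}");
    }
}
```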

The interfaces and corresponding functions also work on scene objects. But! The scene will need a camera with a physics raycaster - more on that as we move along.

Below are the supported events (out of the box) from Unity:
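
For reference, the interfaces (each requiring the matching On… function) are:

  • Pointer: IPointerEnterHandler, IPointerExitHandler, IPointerDownHandler, IPointerUpHandler, IPointerClickHandler

  • Drag and drop: IInitializePotentialDragHandler, IBeginDragHandler, IDragHandler, IEndDragHandler, IDropHandler, IScrollHandler

  • Selection and input: IUpdateSelectedHandler, ISelectHandler, IDeselectHandler, IMoveHandler, ISubmitHandler, ICancelHandler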


Example Disclaimer

The examples below are intended to be simple and show what CAN be done. There will be edge cases and extra logic needed for most implementations. My hope is that these examples show you a different way to do some of these things - a simpler and cleaner way. The examples also make use of DOTween to add a little juice. If you’re not using it, I’d recommend it, but it’s optional all the same.

Also, in the examples, each of the functions being used corresponds to an interface that needs to be implemented. If you have the function but it’s not getting called, double-check that you have implemented the interface in the class.


UI Popup

A simple use case of the event handlers is a UI popup to show the player information about an object that the pointer is hovering over. This can be accomplished by using the IPointerEnterHandler and IPointerExitHandler interfaces. For my example, I chose to invoke a static event when the pointer enters the object (to open a popup) and another when the pointer exits (to close the popup). Using events has the added benefit that other systems beyond the popup menu can also be aware of the event/action - which is huge and can allow more polish and juice to be added. It also means that information about the event and the object can be passed with the event.

In my particular case, the popup UI element is listening to these events, and since the PointerEventData is passed with the event, the popup can appear on-screen near the object. Rather than place the popup window at the same location as the pointer, I’m using a small offset.

This code is placed on objects - to enable popup
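
A sketch of what that component might look like (the event and field names here are my own, not necessarily the exact code from the repo):

```csharp
using System;
using UnityEngine;
using UnityEngine.EventSystems;

// Placed on objects (scene objects need a collider) that should trigger the popup
public class PopupTrigger : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    // Static events let any system (popup, SFX, tutorials...) react to the hover
    public static event Action<PopupTrigger, PointerEventData> OnObjectHoverStart;
    public static event Action<PopupTrigger, PointerEventData> OnObjectHoverEnd;

    [TextArea]
    [SerializeField] private string popupInfo = "Info about this object";

    public string PopupInfo => popupInfo;

    public void OnPointerEnter(PointerEventData eventData)
    {
        OnObjectHoverStart?.Invoke(this, eventData);
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        OnObjectHoverEnd?.Invoke(this, eventData);
    }
}
```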

Physics Raycaster

If you want or need the Event Handlers to work on scene objects (like the example above) you will need to add a Physics Raycaster to your camera.

This is pretty straightforward, with the possible exception of the layer mask. If you are getting unwanted interactions, you will need to do some sorting of layers in your scene and edit the layer mask accordingly.

For example in my project game units have a “Unit Detection” object on them which includes a large sphere collider. This is used to detect opposing units when they get close. The “Unit Detection” object is on a different layer to avoid unwanted interactions between scene objects. In my case, I also wanted to turn off this layer in the physics raycaster layer mask - as the extra colliders were blocking the detection of the pointer on the small collider surrounding the actual unit.

This code is placed on the popup window itself
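
And a sketch of the listener side, assuming a Screen Space - Overlay canvas (the offset value and structure are illustrative):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Placed on the popup window - listens for the static hover events
public class PopupWindow : MonoBehaviour
{
    [SerializeField] private Vector2 offset = new Vector2(30f, 30f); // keeps the window off the pointer
    [SerializeField] private GameObject window; // the visual part of the popup

    private void OnEnable()
    {
        PopupTrigger.OnObjectHoverStart += ShowPopup;
        PopupTrigger.OnObjectHoverEnd += HidePopup;
    }

    private void OnDisable()
    {
        PopupTrigger.OnObjectHoverStart -= ShowPopup;
        PopupTrigger.OnObjectHoverEnd -= HidePopup;
    }

    private void ShowPopup(PopupTrigger trigger, PointerEventData eventData)
    {
        window.SetActive(true);
        // PointerEventData.position is in screen space - offset so the window sits beside the pointer
        window.transform.position = eventData.position + offset;
    }

    private void HidePopup(PopupTrigger trigger, PointerEventData eventData)
    {
        window.SetActive(false);
    }
}
```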


Drag and Drop

This came up in my Grub Gauntlet game from a tester. Originally, I had buttons at the top of the screen that spawned a new game element in the middle of the screen when clicked. This worked and was fine for the game jam, but being able to drag and drop the object is more intuitive and feels a whole lot better. So how do you do that with a button (or image)? Three event handlers make this really easy.

This goes on the UI element and needs to have the prefab variable set in the inspector
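
Something along these lines (a sketch under my own naming; the following paragraphs walk through each piece):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Goes on the UI button/image - assign the prefab in the inspector
public class ObjectPlacer : MonoBehaviour, IPointerDownHandler, IPointerUpHandler, IUpdateSelectedHandler
{
    [SerializeField] private GameObject prefab;

    private GameObject objectBeingPlaced;
    private Plane plane = new Plane(Vector3.up, Vector3.zero); // ground plane at y = 0

    public void OnPointerDown(PointerEventData eventData)
    {
        objectBeingPlaced = Instantiate(prefab);
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        objectBeingPlaced = null; // letting go "places" the object
    }

    // Called every tick while this UI element is the selected object
    public void OnUpdateSelected(BaseEventData eventData)
    {
        if (objectBeingPlaced == null)
            return;

        // Raycast from the pointer into the world and move the object along the plane
        // (with the new input system, swap in Mouse.current.position.ReadValue())
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (plane.Raycast(ray, out float distance))
            objectBeingPlaced.transform.position = ray.GetPoint(distance);
    }
}
```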

First, when the pointer is down on the UI element a new prefab instance is created and the “objectBeingPlaced” variable is set. Setting this variable allows us to track and manipulate the object that is being placed.

Then when the pointer comes up objectBeingPlaced is set to null to effectively place the object.

But the real magic here is in the OnUpdateSelected function. This is called “every tick” - effectively working as an update function. To my understanding, it is only called while the object is selected - so it stops being called once the pointer comes up, or at the very least once the next object is selected. I haven’t done any testing, but I’d guess there are slight performance gains with this approach vs. an update function on each button. Not to mention it just feels a whole lot cleaner.

Inside the OnUpdateSelected function, we check if objectBeingPlaced is null. If it’s not, we want to move the object, and to move it we’re going to do some raycasting. To keep things simple, I’ll create a plane and raycast against it. This limits the movement to the plane, but I think that’ll cover most use cases.

This is SO much simpler and cleaner than what I’ve done in the past.

If you haven’t seen the Plane class (I just discovered it a few weeks back), a plane is defined by a normal vector and a point on the plane. It also has a built-in Raycast function, which is much simpler to use than the physics raycaster - albeit more limited in functionality.


Double Click

How about a double click? There are a LOT of solutions out there that are way more complex than what appears to be needed. All kinds of coroutines, updates, variables…. You just don’t need it. Unity gives us a built-in way to register click count. So let’s make use of it.

The real star of the show is the OnPointerClick function and the PointerEventData that is passed into it. Here, all we need to do is check if eventData.clickCount is equal to 2. If it is, there was a double click.
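
A sketch of the whole thing, including the scale juice described below (I’ve used a plain scale change where the original leans on DOTween):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class DoubleClickHandler : MonoBehaviour, IPointerClickHandler, IPointerEnterHandler, IPointerExitHandler
{
    private Vector3 startScale;

    private void Start()
    {
        startScale = transform.localScale; // cache the original scale
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        // Unity counts consecutive clicks for us - no coroutines or timers needed
        if (eventData.clickCount == 2)
            Debug.Log($"Double click on {gameObject.name}");
    }

    // A little feedback juice - swap these for DOTween tweens for smoother motion
    public void OnPointerEnter(PointerEventData eventData) => transform.localScale = startScale * 1.1f;

    public void OnPointerExit(PointerEventData eventData) => transform.localScale = startScale;
}
```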

Could it be much easier?

In addition, this works equally well with UI and scene objects (the latter need a physics raycaster).

The rest of the code presented just adds a bit of juice and some player feedback. We cache the scale of the object in the Start function. Then when the pointer enters the object we tween the scale up and likewise when the pointer exits we tween the scale back down to its original size.

As a side note, registering the double click did not work for me with the new input system version 1.0.2. An update to 1.3 fixed the issue. There was no issue with the “old” input system.


Moving Scene Objects

Okay, so what if you want to move an object around in the scene, but that object is already in the scene? This is very similar to the example above, however (in my experience) we need an extra step.

We need to set the selected gameObject - without this, the OnUpdateSelected function will not get called, as the event system doesn’t automatically set a scene object as selected.

Setting the selected object needs to happen in the OnPointerDown function. Then in the OnPointerUp function, the selected object gets set to null - this prevents any unwanted interactions from the object still being the “selected” object.

The other bit that I’ve added is the OnCancel function (and interface). This gets invoked when the player presses the cancel button - which by default is set as the escape key. If this is pressed I return the gameObject to its starting location and again set the selected object to null. This is a “nice to have” and really easy to add.
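
Pulling those pieces together, a sketch might look like this (plane raycasting as in the earlier example; names are my own):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Goes on the scene object - needs a collider, plus a physics raycaster on the camera
public class MoveableObject : MonoBehaviour, IPointerDownHandler, IPointerUpHandler,
    IUpdateSelectedHandler, ICancelHandler
{
    private Vector3 startPosition;
    private bool isMoving;
    private Plane plane = new Plane(Vector3.up, Vector3.zero);

    public void OnPointerDown(PointerEventData eventData)
    {
        startPosition = transform.position;
        isMoving = true;
        // Scene objects aren't selected automatically - without this,
        // OnUpdateSelected never gets called
        EventSystem.current.SetSelectedGameObject(gameObject);
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        isMoving = false;
        EventSystem.current.SetSelectedGameObject(null); // avoid lingering "selected" state
    }

    public void OnUpdateSelected(BaseEventData eventData)
    {
        if (!isMoving) return;
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (plane.Raycast(ray, out float distance))
            transform.position = ray.GetPoint(distance);
    }

    // Invoked when the player presses the cancel button (escape by default)
    public void OnCancel(BaseEventData eventData)
    {
        transform.position = startPosition;
        isMoving = false;
        EventSystem.current.SetSelectedGameObject(null);
    }
}
```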

Dragging UI Objects

Who doesn’t like a draggable window? Once again these are easy to create using a handful of event handlers.

Let’s get right to the star of the show: the OnBeginDrag and OnDrag functions. When the drag begins, we want to calculate an offset between the pointer and the location of the object. This prevents the object from “snapping onto the pointer,” which doesn’t feel great - doubly so if the object is large.

Next, we need to set the object to be the last sibling. Since UI objects are drawn in the order that they are in the hierarchy this helps to ensure the object being dragged is on top. If you have a more complex UI structure you may need to get more clever with this and change the parent transform as well (we do this a bit in the next example).

Then!

In the OnDrag function, we simply set the position of the object to the position of the pointer minus the offset. And that’s all it takes to drag a UI object.

But! I did add a bit more juice. The OnPointerEnter and OnPointerExit functions tween the scale of the object to give a little extra feedback. Then in OnEndDrag, I play a simple SFX to add yet a bit more polish.
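
Here’s a sketch of the whole component (plain scale changes stand in for the DOTween tweens, and the SFX call is left as a comment):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class DraggableWindow : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler,
    IPointerEnterHandler, IPointerExitHandler
{
    private Vector2 offset;
    private Vector3 startScale;

    private void Start()
    {
        startScale = transform.localScale;
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        // Offset prevents the window from snapping onto the pointer
        offset = eventData.position - (Vector2)transform.position;
        // Last sibling draws on top of its siblings
        transform.SetAsLastSibling();
    }

    public void OnDrag(PointerEventData eventData)
    {
        transform.position = eventData.position - offset;
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        // Good spot for a "drop" SFX or other feedback
    }

    public void OnPointerEnter(PointerEventData eventData) => transform.localScale = startScale * 1.05f;

    public void OnPointerExit(PointerEventData eventData) => transform.localScale = startScale;
}
```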

Drag and Drop “Inventory”

There is a Unity package with this prefab in the Github repo (link at the top)

Creating a full inventory system is much more complicated than this example. BUT! This example should be a good foundation for the UI part of an inventory system or a similar system that allows players to move UI objects. That said this is definitely the most complex of all the examples and it requires two classes. One is on the moveable object and the other is on the slot itself.

The UI structure also requires a bit of setup to work. In my case, I’ve used a grid with white slots (images) to drop an item into. The slots themselves have a vertical layout group - this helps snap the item into place and makes sure that it fills the slot.

Basic Setup of the Inventory Slot Object

Inventory Slot Component

The slots also have the “Inventory Slot” component attached. This is the simpler of the two bits of code so let’s start there.

The inventory slot makes use of the IDropHandler interface. This requires the OnDrop function - which gets called when another object gets dropped on it. In this case, all we want to do is set the parent of the object being dragged to the slot it was dropped on. And thankfully our event data has a reference to the object being dropped - once again keeping things clean and simple.
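
A sketch of the slot component (this is about as small as it gets):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Attached to each slot - the vertical layout group snaps the item into place
public class InventorySlot : MonoBehaviour, IDropHandler
{
    public void OnDrop(PointerEventData eventData)
    {
        // eventData.pointerDrag is the object currently receiving OnDrag
        if (eventData.pointerDrag != null)
            eventData.pointerDrag.transform.SetParent(transform);
    }
}
```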

There are a ton of edge cases that aren’t addressed with this solution and are beyond the scope of this tutorial. For example: Checking if the slot is full. Limiting slots to certain types of objects. Stacking objects…

Okay. Now the more complicated bit. The inventory tile itself. The big idea here is we want to drag the tile around, keep it visible (last sibling) and we need to toggle off the raycast target while dragging so that the inventory slot can register the OnDrop event. Also, if the player stops dragging the item and it’s not on top of an inventory slot then we’re going to send the item back to its starting slot.

At the top, there are two variables. The first tracks the offset between the item and the pointer, just like in the previous example. The second will track which slot (parent) the item started in.

Then in OnBeginDrag, we set the starting slot variable, set the parent to the root object (the canvas), and set this object as the last sibling. These last two steps keep the item visible and dragging above other UI objects. We then cache the offset and set the raycast target to false. This needs to be false to ensure that OnDrop is called consistently on the inventory slot - it only gets called if the raycast can hit the slot and isn’t blocked by the object being dragged.

An important note on the raycast target: RaycastTarget needs to be set to false for all child objects too. In my case, I turned this off manually in the text object - but if you have a more complex object a Canvas Group component can be used to toggle this property for all child objects.

Moving on to the OnDrag function, this looks just like the example above, where we set the position of the object to the pointer position minus the offset.

Finally, the OnEndDrag function is where we need to toggle the raycastTarget back on so that we can move it again later. Also, now that the dragging has ended, we want to see if the current parent of the item is an inventory slot. If it is - it’s all good - if not, we want to set the parent back to the starting slot. Because of the vertical layout group, setting the parent will snap the position of the item back to its starting position. It’s worth noting that OnEndDrag (item) gets called after OnDrop (slot), which is why this works.
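
Put together, the item component might look something like this (a sketch assuming the item has an Image component; swap in a Canvas Group if you have child graphics):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

public class InventoryItem : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler
{
    private Vector2 offset;       // offset between the item and the pointer
    private Transform startingSlot; // the slot (parent) the item started in
    private Image image;

    private void Awake()
    {
        image = GetComponent<Image>();
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        startingSlot = transform.parent;
        transform.SetParent(transform.root); // re-parent to the canvas...
        transform.SetAsLastSibling();        // ...and draw above everything else
        offset = eventData.position - (Vector2)transform.position;
        // Must be off so the raycast can reach the slot underneath and call OnDrop
        image.raycastTarget = false;
    }

    public void OnDrag(PointerEventData eventData)
    {
        transform.position = eventData.position - offset;
    }

    // OnEndDrag (item) runs after OnDrop (slot), so the parent is already updated here
    public void OnEndDrag(PointerEventData eventData)
    {
        image.raycastTarget = true;
        if (transform.parent.GetComponent<InventorySlot>() == null)
        {
            // Not dropped on a slot - the layout group snaps it back into place
            transform.SetParent(startingSlot);
        }
        // optional: play a SFX here
    }
}
```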

Note: I also added a SFX to the OnEndDrag. This is optional and can be done in a lot of different ways.

Pointer Event Data

I had hoped to go into a bit more detail on the Pointer Event Data class, but this post is already feeling a bit long. That said there is a ton of functionality in that class that can make adding functionality to Event Handlers so much easier. I’d also argue that a lot of the properties are mostly self explanatory. So I’ll cut and paste the basic documentation with a link to the page here.

Properties

button The InputButton for this event.

clickCount Number of clicks in a row.

clickTime The last time a click event was sent.

delta Pointer delta since last update.

dragging Determines whether the user is dragging the mouse or trackpad.

enterEventCamera The camera associated with the last OnPointerEnter event.

hovered List of objects in the hover stack.

lastPress The GameObject for the last press event.

pointerCurrentRaycast RaycastResult associated with the current event.

pointerDrag The object that is receiving OnDrag.

pointerEnter The object that received 'OnPointerEnter'.

pointerId Identification of the pointer.

pointerPress The GameObject that received the OnPointerDown.

pointerPressRaycast Returns the RaycastResult associated with a mouse click, gamepad button press or screen touch.

position Current pointer position.

pressEventCamera The camera associated with the last OnPointerPress event.

pressPosition The screen space coordinates of the last pointer click.

rawPointerPress The object that the press happened on even if it can not handle the press event.

scrollDelta The amount of scroll since the last update.

useDragThreshold Should a drag threshold be used?

Public Methods

IsPointerMoving Is the pointer moving?

IsScrolling Is scroll being used on the input device?

Inherited Members

Properties

used Is the event used?

currentInputModule A reference to the BaseInputModule that sent this event.

selectedObject The object currently considered selected by the EventSystem.

Public Methods

Reset Reset the event.

Use Use the event.

Quitting a Job I Love

This has nothing to do with game development or the OWS YouTube channel. I’m writing this to get my thoughts out. Nothing more. Nothing less.

Here’s how it all turned out

I’m one of those lucky people. I have a job that I love. I really do. It’s an amazing job. I’ve taught just about every level of math from Algebra to Differential Equations. I’ve taught physics, robotics, game design, and an art class using Blender. I’ve spent countless hours each fall riding with and coaching the competitive mountain bike team. I’ve spent many winter days on the ski hill trying to convince students that carving a turn on skis is more fun than just “pointing it.” I’ve helped to build up the robotics team from nothing to a team that is competitive at the state level. Every spring and fall, I’ve packed up a bus full of mountain bikers and headed out on week-long trips to the Colorado and Utah desert or to the beautiful mountains of Crested Butte. It’s an amazing job. I have poured my heart and soul into this school.

I don’t want to quit. But the job has taken a toll. I am tired. I am exhausted. I am burned out.

My school has a policy of not counting hours. There is no year-end evaluation or mechanism for feedback. This means no one knows how hard we actually work. This means there is no limit to how much we work. This means we can be asked to do more at any time with little or no compensation.

The administration paints a rosy picture for the school board. Most teachers have been here for over a decade, and many for over two decades. But things are changing. We grumble in private. When we do approach the administration, we are told we are doing a good job and that this is just what it takes to work at a boarding school (and there is truth to that). But our concerns are wiped away with excessive positivity or seemingly ignored. It doesn’t feel good. At a school that is about community and relationships, there is little to no sense of community between the administration and the teaching staff.

As a school, we pride ourselves, and justifiably so, on the strong relationships with our students, but after two years of a pandemic, no administrator has truly taken the time to see how I’m doing personally or professionally. They are stressed and overworked too. I think the presumption is if I haven’t quit I’m doing okay.

We are a “family” when the school needs something from us and when we need something from the school we are told we are being “transactional.” We sign a contract in February that binds us to the school until the next June. There is no meaningful negotiation. No way to earn more (beyond our annual 3% raise). No promotions. No way to adjust our workload. No way to move off-campus. The only lever we have to pull to change our situation is to quit. If we do quit, we lose a paycheck, housing, utilities, food, and health insurance. It is terrifying to make a change and few of us do.

During my time here I have seen kids who barely knew how to mountain bike become state champion racers. I’ve seen aimless students discover computer science or physics or art and find a reason to go to college. I’ve seen kids that have been bullied in previous schools find friends and community. I’ve watched countless students discover a sport that has given them confidence and a sense of belonging.

We do amazing things for students and I love being a small part of this school. But like so many schools this work is done on the backs of the teachers.

In many ways, we are a rudderless ship. I can’t tell you the last time I saw an administrator in the classroom building to observe let alone when I last had any meaningful feedback. I couldn’t tell you what the mission and vision of the school are. I can’t tell you the school’s goals - other than to provide for students in any way possible and to fundraise for new buildings. We seem increasingly driven by budget and money. While I’m sure that is not 100% fair or even true, that is what it feels like, and what things feel like can be just as important or even more so than what is actually true.

While there is so much good at our school, there also feels like there is willful blindness to what is not working or feeling good. Throwing spouses off insurance, cancelation of sabbatical, no published pay scale, poor maternity leave, worse paternity leave, ever-increasing expectations and workloads, and most of all the lack of voice. As teachers, as professionals, as members of the community, we want to be heard. We want to have some agency.

Again I love my job. I do. It pisses me off. It makes me angry. But I love it. Like any relationship, it’s flawed. That’s okay. I would love to find a way forward, a way to make the job sustainable and not feel emotionally drained and burned out. But relationships that only go one way are dysfunctional.

I believe there are many at the school who do truly care about staff, but they are overworked and hamstrung by policies that make sense on paper but that forget that we are people, not cogs in a machine.

I have slowly come to peace with the situation. I am not entitled to having the school change. I can’t make the school change. All I can do is control how I react and what I do.

With a tear in my eye and a lump in my throat, I am pulling the only lever I have to pull. I am quitting.

Split Screen: New Input System & Cinemachine

Some Background Knowledge ;)

While networked multiplayer is a nightmare that can easily double your development time, local split-screen is much, much easier to implement. And Unity has made it even easier, thanks in big part to Unity’s New Input System and some of the tools that ship with it.

So in this tutorial, we’re going to look at a few things:

  • Using the built-in systems (new input system) to create local multiplayer.

  • Adding optional split-screen functionality

  • Modifying controller code for local multiplayer

  • Responding when players are added through C# events

    • Spawning players at different points on the map.

    • Toggling objects

  • Using Cinemachine with split-screen.

We are NOT going to look at creating a player selection screen or menu. But! That is very possible with this system and could be the topic of a future tutorial. There is also a bit of crudeness to how Unity splits the screen. It initially splits left/right, not up/down, and the split screens don’t always fill the display - with three players, for example, the screens will not cover the entire display. Fixing these issues would require customization that’s beyond the scope of this tutorial.

I’ll be using the character controller from my “Third Person Controller” video for this tutorial. Although any character controller (even just a jumping cube) using the new input system should work equally well. You can find the code for the Third Person Controller here and the code for this tutorial here.

Split Screen in Action

So What is ACTUALLY Happening?

There is a lot going on behind the scenes to create split-screen functionality most of which is handled by two components - Player Input Manager and Player Input. Both of these components ship with the New Input System. While these classes are not simple - 700 and 2000 (!!) lines respectively - the end result is pretty straightforward and relatively easy to use.

The Player Input Manager detects when a button on a new device (keyboard, gamepad, etc) is pressed. When that happens an object with a Player Input component is instantiated. The Player Input creates an instance of an Input Action Asset and assigns the device to that instance.

The object that is instantiated could be the player object but in reality, it’s just holding a reference to the Input Action Asset (via the Player Input component) for a given player and device. So if you do want to allow players to select their character, or perform some other action before jumping into the game, you could connect the character selection UI elements to the Input Action Asset and then when the player object is finally created you connect it to the Input Action Asset. This becomes easier if you create additional action maps - one for selection and one for in-game action.

The Basics

To get things started you’ll need to add in the New Input System through the Unity Package Manager. If you haven’t played with the New Input System, definitely check out the earlier post and video covering the basics.

Here’s what needs to happen:

  1. Add the New Input System to your project

  2. Create an Input Action Asset (save it and generate the C# class).

  3. Add the Player Input Manager component to a scene object.

  4. Create a “player” prefab and add the Player Input component.

  5. Assign the Input Action Asset to the Player Input component.

  6. Assign the player prefab to the Player Input Manager component.

With that done, kick Unity into play mode and press a button on your keyboard or mouse. You should see a character prefab get instantiated. If you have a controller press a button on it and another prefab should be created.

In some cases, I have seen Unity treat multiple devices all as one. This occurred when I connected the devices before setting up the Player Input Manager. For me, a quick restart of Unity resolved this issue.

A Little Refinement

I have had some issues with Unity detecting the mouse and keyboard as separate devices. One way to resolve this is by defining control schemes, but I haven’t found the secret sauce to make that work smoothly and consistently. Another way around it is to set “Join Behavior” to “Join Players When Join Action Is Triggered” in the Player Input Manager and to create a Join action in the Input Action Asset. I set the join action to “any key” on the keyboard and the “start” button on a gamepad.

If you want your players to all play with the same camera, i.e. all have the same view for a co-op style game, then much of the next section can be skipped.

Adding Split Screen

If you want each player to have their own camera, for example in an FPS, the next step is to make sure that the player prefab has a camera component - this is important so that when each player object is instantiated it has its own camera object.

The structure of my Player Prefab

In my case, the camera and the player object need to be separate objects and I’d guess this is true for many games. To make this work, simply create an empty object and make the camera and player objects children of the empty. Then create a new prefab from the empty object (with attached children) and reassign this prefab to the Player Input Manager. The Player Input component (in my experience) can go on any object on the prefab - so put it where it makes the most sense to you - I kept mine on the player object itself rather than on the empty parent.

You may have noticed that the Player Input component has a camera slot. So on the prefab, assign the camera to that slot. This is needed so the split screen can be set up correctly.

The last step before testing is to click the “split screen” toggle on the Player Input Manager. If you are using Cinemachine for your camera control, you should still get split-screen functionality, but all the views are likely looking through the same camera. We’ll fix that in a bit.

Connection to the Input Action Asset

Old code is commented out. New code is directly below.

If you’ve been playing around with a player object that has a controller component you may have noticed that all the players are still being controlled by a single device - even if you have split-screen working.

To fix this we need that controller component to reference the Input Action Asset on the Player Input component. To do that we need to change the type of our Input Action Asset from whatever specific type you’ve created, in my case “Third Person Action Asset,” to the more general “Input Action Asset.” We can then get a reference to the Player Input component with GetComponent or GetComponentInChildren depending on the structure and location of your components. To access the actual Input Action Asset we need to add a “dot Actions” to the end.

Now for the messy bit. Since there is no way to know what type of Input Action Asset we’ve created we need to find the Action Maps and individual actions using strings. Yuck. But it works.

We can get references to action maps using FindActionMap and references to actions using FindAction. Take care to spell the names correctly and with the correct capitalization. And this is all we need to do. Update the references to the Input Action Asset, Action Maps, and Actions, and the rest of your controller code can stay the same.
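
A trimmed-down sketch of the change (the map and action names “Player,” “Move,” and “Jump” are placeholders for whatever is in your asset):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class ThirdPersonController : MonoBehaviour
{
    // Was the generated class (e.g. ThirdPersonActionAsset) - now the general type:
    private InputActionAsset actions;
    private InputAction moveAction;
    private InputAction jumpAction;

    private void Awake()
    {
        // The instance of the asset lives on the Player Input component
        var playerInput = GetComponentInChildren<PlayerInput>();
        actions = playerInput.actions;

        // Strings. Yuck. But it works - watch spelling and capitalization.
        var gameplayMap = actions.FindActionMap("Player");
        moveAction = gameplayMap.FindAction("Move");
        jumpAction = actions.FindAction("Jump"); // FindAction also works on the whole asset
    }

    private void OnEnable() => actions.Enable();
    private void OnDisable() => actions.Disable();
}
```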

Give it a quick test and each player object should now be controlled by a unique device.

Reacting to Players Joining

If you want to control where players spawn, or maybe turn off a scene overview camera once the first player spawns, we’re going to need a bit more functionality. Unity gives us onPlayerJoined (and onPlayerLeft) events on the Player Input Manager that we can subscribe to, allowing us to do stuff when a player joins. In addition, onPlayerJoined passes a reference to the new player’s PlayerInput component - which turns out to be very useful.

To make use of these events, we need to change the “Notification Behavior” on the Player Input Manager to “Invoke C Sharp Events.” Unity won’t throw errors if this isn’t set correctly, but the events simply won’t get invoked.

Spawn Locations

To demonstrate how to control where players spawn, let’s create a new PlayerManager class. This class needs access to UnityEngine.InputSystem, so make sure to add that using statement at the top. The first task is to get a reference to the PlayerInputManager component, and I’ve done that with FindObjectOfType. We can then subscribe and unsubscribe from the onPlayerJoined event. In my case, I’ve subscribed an “AddPlayer” function that takes in the PlayerInput component.

There are several ways to make this work, but I chose to create a list of the PlayerInput components - effectively keeping a reference to all the spawned players - as well as a list of transforms that function as in-game spawn points. These spawn points can be anything, but I used empty gameObjects.

When a player joins, I add the PlayerInput component to the list and then set the position of the player object to the corresponding transform’s position in the spawn list. I’ve kept it simple, so that player 1 always spawns in the first location, player 2 in the second location, and so on.

Because of the structure of my player prefab, I am setting the position of the parent not the character object. My player input component is also not on the prefab root object. So your code may look a bit different if your prefab is structured differently.
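
A sketch of that manager, assuming a prefab where the Player Input sits on a child object (adjust the parent lookup to match your structure):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerManager : MonoBehaviour
{
    [SerializeField] private List<Transform> spawnPoints = new List<Transform>();

    private readonly List<PlayerInput> players = new List<PlayerInput>();
    private PlayerInputManager playerInputManager;

    private void Awake()
    {
        playerInputManager = FindObjectOfType<PlayerInputManager>();
    }

    private void OnEnable()
    {
        // Requires Notification Behavior = Invoke C Sharp Events
        playerInputManager.onPlayerJoined += AddPlayer;
    }

    private void OnDisable()
    {
        playerInputManager.onPlayerJoined -= AddPlayer;
    }

    private void AddPlayer(PlayerInput player)
    {
        players.Add(player);
        // Player Input is on a child object, so move the prefab root (the parent)
        Transform playerRoot = player.transform.parent != null
            ? player.transform.parent
            : player.transform;
        // Player 1 spawns at the first point, player 2 at the second, and so on
        playerRoot.position = spawnPoints[players.Count - 1].position;
    }
}
```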

Toggling Objects on Player Join

If the only camera objects in your scene are part of the player objects that means that players see a black screen until the first player joins. Which is fine for testing, but isn’t exactly polished.

A quick way to fix this is to add a camera to the scene and attach a component that will toggle the camera off when a player joins. You could leave the camera on, but this would make the computer work harder than it needs to as it’s having to do an additional and unseen rendering.

So just like above when controlling the spawn location, we need a new component that has access to the Input System and subscribes to the onPlayerJoined event. Then we just need a simple function, subscribed to that event, that will toggle the gameObject off. Couldn’t be simpler.

This of course can be extended and used in as many systems as you need. Play a sound effect, update UI, whatever.
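
A minimal sketch of that toggle component (names are my own):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Attach to the scene overview camera - turns it off when the first player joins
public class ToggleOnPlayerJoin : MonoBehaviour
{
    private PlayerInputManager playerInputManager;

    private void Awake()
    {
        playerInputManager = FindObjectOfType<PlayerInputManager>();
        playerInputManager.onPlayerJoined += DisableObject;
    }

    private void OnDestroy()
    {
        playerInputManager.onPlayerJoined -= DisableObject;
    }

    private void DisableObject(PlayerInput player)
    {
        gameObject.SetActive(false); // saves an extra, unseen render
    }
}
```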

Cinemachine!

If you are using more than one camera with Cinemachine it’s going to take a bit more work. We need to get each virtual camera working with the corresponding Cinemachine Brain. This is done by putting the virtual camera on a specific layer and then setting the camera’s culling mask accordingly.

The first step is to create new layers - one for each possible player. In my case, I’ve set the player limit in the Player Input Manager component to 4 and I’ve created four layers called Player1 through Player4.

To make this easier - or really just a bit less error-prone once set up - I’ve added a list of layer masks to the Player Manager component: one layer mask for each player that can be added. The values for the layer masks can then be set in the inspector - nice and easy.

Same Add Player Function from above

Then comes the ugly part. Layer masks are bit masks and layers are integers. Ugh. I’m sure there are other ways to do this, but our first step is to convert our player layer mask (bitmask) to a layer (integer). So in our Player Manager component, in the Add Player function, we do the conversion with a base-2 logarithm - think powers of 2 and binary.

Next, we need to get references to the camera and virtual camera. In my case the Player Input component (which is what we get a reference to from the OnPlayerJoin action) is not on the parent object, so I first need to get a reference to the parent transform and then search for the CinemachineFreeLook and Camera components in the children. If you are using a different virtual camera you’ll need to search for the type you are using.

Once we have a reference to the Cinemachine Virtual Camera component, we can set the gameObject layer to the layer integer value we created above.

Go to 9:00 on the video for bitwise operations.

For the camera’s culling mask, it’s a bit more work: we don’t want to just set the layer mask, we need to add our player layer to it. This gets done with the black magic that is bitwise operations. Code Monkey has a pretty decent video explaining some of how this works (go to the 9:00 mark), albeit in a slightly different context.
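
Pulled out into a helper for clarity, the relevant bit of my hypothetical AddPlayer code might look like this (it assumes a CinemachineFreeLook; search for whatever virtual camera type you use):

```csharp
using UnityEngine;
using Cinemachine;

public static class SplitScreenLayers
{
    // playerLayerMask should contain exactly one layer (e.g. "Player1"), set in the inspector
    public static void AssignPlayerLayer(Transform playerRoot, LayerMask playerLayerMask)
    {
        // LayerMask is a bitmask, gameObject.layer is an int: convert with a base-2 log
        int layer = (int)Mathf.Log(playerLayerMask.value, 2);

        var virtualCamera = playerRoot.GetComponentInChildren<CinemachineFreeLook>();
        var camera = playerRoot.GetComponentInChildren<Camera>();

        // Put the virtual camera on this player's layer...
        virtualCamera.gameObject.layer = layer;

        // ...and ADD that layer to the camera's culling mask with a bitwise OR
        camera.cullingMask |= 1 << layer;
    }
}
```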

If everything is set up correctly, we should be able to test our code and have each Cinemachine camera looking at the correct player.

But! You might still see an issue - depending on your camera and how it’s being controlled.

Cinemachine Input Handler

If you are using a Cinemachine Input Handler to control your camera you are likely still seeing all the cameras controlled by one device. This is because the Cinemachine Input Handler is using an Input Action Reference which connects to the Input Action Asset - the scriptable object version - not the instance of the Input Action Asset in the Player Input component. (You’ve got to love the naming…)

To fix this, we are going to create our own input handler - so we’ll copy and modify the “Get Axis Value” function from the original Cinemachine Input Handler. This function takes in an integer corresponding to an axis and returns a float value from the matching action.

Note that this component implements the IInputAxisProvider interface. This is what the Cinemachine virtual camera looks for to get the input.
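
A sketch of such a handler, assuming the Player Input component sits on a parent object and the look action is called “Look” (adjust both to your setup):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using Cinemachine;

// Replaces the Cinemachine Input Handler - reads from THIS player's action asset instance
public class PlayerInputHandler : MonoBehaviour, AxisState.IInputAxisProvider
{
    private InputAction lookAction;

    private void Awake()
    {
        var playerInput = GetComponentInParent<PlayerInput>();
        lookAction = playerInput.actions.FindAction("Look");
        lookAction.Enable();
    }

    // Cinemachine calls this per axis: 0 = X, 1 = Y, 2 = Z
    public float GetAxisValue(int axis)
    {
        Vector2 look = lookAction.ReadValue<Vector2>();
        switch (axis)
        {
            case 0: return look.x;
            case 1: return look.y;
            default: return 0f;
        }
    }
}
```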

Replace the Cinemachine Input Handler with this new component and you should be good to go.

(Better) Object Pooling

Why Reinvent?

Object Pooling 1.0

Yes, I’ve made videos on Object Pooling already. Yes, there’s a written post on it too. Does the internet really need another Object Pooling post? No, not really. But I wanted something a bit slicker and easier to use. I wanted a system with:

  1. No Object Pool manager.

  2. Objects can return themselves without needing to “find” the Object Pool.

  3. Can store a reference to a component, not just the gameObject.

  4. An interface to make the initialization of objects easy and consistent.

Easily the best video on generics I’ve ever made.

So after a lot of staring at my computer and plenty of false starts in the wrong direction, I came up with what I’m calling Object Pool 2.0. It makes use of generics, interfaces, and actions. So it’s not the easiest object pooling solution to understand, but it works. It’s clean and easier to use than my past solutions. I like it.

If you’re just here for the code, you can get it on GitHub. But you should definitely skim a bit further to see how it’s implemented.

If you’re asking why not just use the Unity 2021 object pool solution: well, I’m scared of Unity 2021 (at this point in time) AND I’ve seen some suggestions that it’s not quite ready for prime time. Plus my solution has some features that Unity’s doesn’t, and creating an object pooling solution isn’t hard.

Implementation

Maybe this seems backward. Maybe it is? But I think in the case of this object pool solution, it makes sense to show how it’s implemented before going into the gory details. It’s a reasonably abstract solution and that can make it difficult to wrap your head around. So let’s start with how to use it. Then we’ll get to how it works.

First, there needs to be an object that is responsible for spawning the pooled objects. This object owns the pool and needs a reference to the object being pooled - in the case shown, I’m using a gameObject prefab. The spawner then creates an instance of the object pool and passes the prefab reference to the pool, so the pool knows what it is storing and what to create if it runs out while more are being asked for. To get an object from the pool, we simply call Pull.

Spawning object with the Object Pool
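
A usage sketch (the spawner name and key binding are just for illustration; it leans on the ObjectPool and PoolObject classes detailed below):

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
    [SerializeField] private GameObject prefab; // the pooled object - needs a PoolObject component

    private ObjectPool<PoolObject> pool;

    private void Awake()
    {
        // The pool gets the prefab so it knows what to instantiate when it runs dry
        pool = new ObjectPool<PoolObject>(prefab, 10); // pre-spawns 10 instances
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            PoolObject instance = pool.Pull();
            instance.transform.position = transform.position;
        }
    }
}
```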

Just Slap on the Pool Object component and you’re good to go.

The objects being stored also need some logic to work with the pool. The easiest way to attach that logic is to slap on the Pool Object component to the object prefab. This default component will return the object to the object pool when the object is disabled.

Do you see why I like this solution? Now on to the harder part. Maybe even the fun part. Let’s look at how it works.

A Couple Interfaces

To get things started, I created two interfaces. The first one could be useful if I ever need to pool non-MonoBehaviour objects - but is admittedly not 100% necessary at the moment. The second interface, however, is definitely useful and is a big part of this system working smoothly.

But let’s start with the first interface, which helps define the object pool. It has just two functions, a push and a pull. It is a generic interface, where the generic parameter is the type of object that will be stored. This works nicely, as our push and pull functions then know what types they will be handling.
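
As a sketch, that first interface could be as small as:

```csharp
// Defines anything that can act as a pool of T
public interface IPool<T>
{
    T Pull();
    void Push(T t);
}
```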

Is this strictly necessary? Probably not.

The second interface is used to define objects that can be in the object pool. When used as intended the object pool can only contain types that implement the IPoolable interface.

This interface has an initialize function that takes in an Action. This action will get set in the object pool and is intended to be the function that returns the object to the pool. This action is then invoked inside of the ReturnToPool function.
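
And a sketch of the second interface:

```csharp
using System;

// Defines anything that can live in an object pool
public interface IPoolable<T>
{
    // The pool passes in its Push function so the object can return itself
    void Initialize(Action<T> returnAction);
    void ReturnToPool();
}
```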

If that doesn’t all make sense. Well, that’s reasonable. It can feel a bit circular. Let’s hope that’s not still the case by the time we get finished.

Creating the Pool

Let’s next take a look at the Object Pool definition - or at least the definition I’m using for MonoBehaviours. The Object Pool itself has a generic parameter T and implements the IPool interface. T is constrained to be a MonoBehaviour that must also implement the IPoolable interface.

Next, come the variables and properties for the object pool.

First up are two optional actions. These can be assigned in a constructor, which allows you to call a function (or multiple functions) EVERY time an object is pulled out of the pool or pushed back in. This could be used to play SFX, increment a score counter, or just about anything. It seemed useful, so I stuck it in there.

Next is the stack (a last in, first out collection) that holds all the pooled objects.

Since we know the object being stored is a component, we also know it’s attached to a gameObject. It’s this gameObject that will be instantiated if and when the pool runs out of objects in the stack.

Lastly, I added a property to count the number of objects in the pool. I stole this directly from the Unity object pool solution. I haven’t found a use for it yet, but maybe at some point.

Constructors

When we create a pool, we need to tell it what object it will store, and I think the easiest and best way to do that is to inject the object (prefab) using a constructor. In some cases, it’s also nice to pre-fill the pool, so the first constructor (and this is easily added to the second) takes in a number of objects to pre-spawn using the Spawn function.

The second constructor takes in the prefab as well as references for the pullObject and pushObject actions.
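
Putting the fields, constructors, and the Pull/Push functions described below together, the pool might look something like this (a sketch, not necessarily the exact code from the repo):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

public class ObjectPool<T> : IPool<T> where T : MonoBehaviour, IPoolable<T>
{
    // Optional callbacks invoked EVERY time an object is pulled or pushed
    private readonly Action<T> pullObject;
    private readonly Action<T> pushObject;

    // Last in, first out collection of pooled objects
    private readonly Stack<T> pooledObjects = new Stack<T>();

    // The prefab to instantiate when the pool runs dry - must have a T component
    private readonly GameObject prefab;

    public int PooledCount => pooledObjects.Count;

    public ObjectPool(GameObject pooledObject, int numToSpawn = 0)
    {
        prefab = pooledObject;
        Spawn(numToSpawn);
    }

    public ObjectPool(GameObject pooledObject, Action<T> pullObject, Action<T> pushObject,
        int numToSpawn = 0)
    {
        prefab = pooledObject;
        this.pullObject = pullObject;
        this.pushObject = pushObject;
        Spawn(numToSpawn);
    }

    public T Pull()
    {
        // Pop from the stack if we can, otherwise instantiate a fresh one
        T t = PooledCount > 0
            ? pooledObjects.Pop()
            : UnityEngine.Object.Instantiate(prefab).GetComponent<T>();

        t.gameObject.SetActive(true);
        t.Initialize(Push); // hand the object a reference to Push - the secret sauce

        pullObject?.Invoke(t);
        return t;
    }

    public void Push(T t)
    {
        pooledObjects.Push(t);
        pushObject?.Invoke(t);
        // No-op if the object was returned by being disabled (the intended usage)
        t.gameObject.SetActive(false);
    }

    private void Spawn(int number)
    {
        for (int i = 0; i < number; i++)
        {
            T t = UnityEngine.Object.Instantiate(prefab).GetComponent<T>();
            pooledObjects.Push(t);
            t.gameObject.SetActive(false);
        }
    }
}
```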

Push and Pull

The pull function is called whenever an object from the pool is needed.

First, we check if there are objects in the pool; if there are, we pop one out. The gameObject is then set to active, and the Initialize function on the IPoolable object is called. Notice here that we are passing in a reference to the Push function. This is the secret sauce.

This push function is the function used to return an object to the pool. This means the spawned object has a reference to this function and can return itself to the pool. We’ll take a closer look at how this happens later.

We then check if the pullObject action was assigned and if it was we invoke it and pass in the object being spawned.

Finally! We return the object so that whatever object asked for it, can have a reference to it.

The push function is pretty simple. It takes in the object and pushes it onto the stack. It then checks if the pushObject action was assigned and invokes it if it was. Lastly, the gameObject is turned off.

As a side note, the turning on and off of the object in the Pull and Push functions is not 100% needed, but it’s there to ensure the object is toggled correctly and to help keep the Initialize functions clean.

Poolable Objects

Every object that can go into this pool needs to implement the IPoolable interface. Now in some cases, you might want to implement the interface specifically for a given class.

Both as an example of how to implement the interface and to provide an easy-to-use, reusable solution, I created the PoolObject class. This component can simply be added to any prefab to allow that prefab to work with the object pool.

When implementing the IPoolable interface we should set the generic parameter to the class that is implementing the interface - PoolObject in the example.

The class will also need an action to store a reference to the push function. The value of this action is set in the initialize function - which was called in the Pull function of the object pool.

The ReturnToPool function, in this example, is called in the OnDisable function. This means all we need to do to return the object to the pool is turn the object off! Inside the function, we check if the returnToPool action has a value and if so, we invoke the action and pass in a reference to the object being sent to the object pool.
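
A sketch of that default component (return objects by disabling them rather than calling Push directly):

```csharp
using System;
using UnityEngine;

// Slap this on any prefab to make it work with the object pool
public class PoolObject : MonoBehaviour, IPoolable<PoolObject>
{
    // Holds the pool's Push function - set by Initialize inside the pool's Pull
    private Action<PoolObject> returnToPool;

    public void Initialize(Action<PoolObject> returnAction)
    {
        returnToPool = returnAction;
    }

    public void ReturnToPool()
    {
        returnToPool?.Invoke(this);
    }

    // Turning the object off is all it takes to return it to the pool
    private void OnDisable()
    {
        ReturnToPool();
    }
}
```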

Overloads

To make the object pool a bit more useful and user-friendly, I also added several overloads for the Pull function. These allow the position and rotation of the object to be set when pulling it.

I also created functions that return a gameObject, as in some cases this is what is really needed and not the poolable object component.
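
Sketched out, the overloads might look like this (these go inside the ObjectPool<T> class shown above):

```csharp
// Inside ObjectPool<T>: convenience overloads for Pull
public T Pull(Vector3 position)
{
    T t = Pull();
    t.transform.position = position;
    return t;
}

public T Pull(Vector3 position, Quaternion rotation)
{
    T t = Pull();
    t.transform.SetPositionAndRotation(position, rotation);
    return t;
}

// Sometimes the gameObject is what's really needed, not the component
public GameObject PullGameObject(Vector3 position, Quaternion rotation)
{
    return Pull(position, rotation).gameObject;
}
```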

One More Example

Since actions (and delegates) can be confusing, I thought I’d toss in one more example - using the second constructor to assign functions that will be called EVERY time an object is pushed to or pulled from the instance of the object pool.

In this example, I’ve added the CallOnPull and CallOnPush functions. Notice that they must take the type being stored in the object pool as their input. Again, the idea here is that these functions could trigger an animation, SFX, a UI counter - just about anything.
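
A sketch of that usage (Debug.Log stands in for whatever feedback you want to trigger):

```csharp
using UnityEngine;

public class SpawnerWithCallbacks : MonoBehaviour
{
    [SerializeField] private GameObject prefab;

    private ObjectPool<PoolObject> pool;

    private void Awake()
    {
        // Second constructor: these functions run on EVERY pull and push
        pool = new ObjectPool<PoolObject>(prefab, CallOnPull, CallOnPush);
    }

    // Must take the pooled type as input
    private void CallOnPull(PoolObject poolObject)
    {
        Debug.Log($"{poolObject.name} pulled from the pool"); // SFX, UI counter, etc.
    }

    private void CallOnPush(PoolObject poolObject)
    {
        Debug.Log($"{poolObject.name} returned to the pool");
    }
}
```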

And that’s it. It’s an abstract solution but actually pretty simple (note that simple is not the same as easy). That’s both why it took a while to create and why I like it.

Designing a New Game - My Process

Some of my projects….

This is my process. It’s been refined over 8+ years of tinkering with Unity, 2 game jams, and 2 games published to Steam.

My goal with this post is just to share. Share what I’ve learned and share how I am designing my next project. My goal is not to suggest that I’ve found the golden ticket. Cause I haven’t. I’m pretty sure the perfect design process does not exist.

So these are my thoughts. These are the questions I ask myself as I stumble along in the process of designing a project. Maybe this post will be helpful. Maybe it won’t. If it feels long-winded. It probably is.

I’ve tried just opening Unity and designing as I go. It didn’t work out well. So again, this is just me sharing.

TL;DR

  • Set a Goal - To Learn? For fun? To sell?

  • Play games as research - Play small games and take notes.

  • Prototype systems - What don’t you know how to build? Is X or Y actually doable or fun?

  • Culling - What takes too long? What’s too hard? What is too complicated?

  • Plan - Do the hard work and plan the game. Big and small mechanics. Art. Major systems.

  • Minimum Viable Product - Not the game, just the basics. Is it fun? How long did it take?

  • Build it! - The hardest part. Also the most rewarding.

What Is The Goal?

When starting a new project, I first think about the goal for the project. For me, this is THE key step in designing a project - which is a necessary step to the holy grail of actually FINISHING a project. EVERY other step and decision in the process should reflect back on the goal or should be seen through the lens of that goal. If the design choice doesn’t help to reach the goal, then I need to make a different decision.

Am I making a game to share with friends? Am I creating a tech demo to learn a process or technique? Am I wanting to add to my portfolio of work? What is the time frame? Weeks? Months? Maybe a year or two (scary)?

I want another title in this list!

For this next project, I want to add another game to the OWS Steam library and I’d like to generate some income in the process. I have no dreams of creating the next big hit, but if I could sell 1000 or 10,000 copies - that would be awesome.

I also want to do it somewhat quickly. Ideally, I could have the project done in 6 to 9 months, but 12 to 18 months is more likely with the time I can realistically devote to the project. One thing I do know is that whatever amount of time I think it’ll take, it’ll likely take double.

Research!

After setting a goal, the next step is research. Always research. And yes. I mean playing games! I look for games that are of a similar scope to what I think I can make. Little games. Games with interesting or unique mechanics. Games made by individuals or MAYBE a team of 2 or 3. As I play I ask myself questions:

What elements do I find fun? What aspects do I not enjoy? Do I want to keep playing? What is making me want to quit? What mechanics or ideas can I steal? What systems do I know or not know how to make? Which systems are complex? What might be easy to add?

Then there are a few more questions. These are key and crucial in designing a game and can help to keep the game’s scope (somewhat) in check - which in turn is necessary if a game is going to get finished.

How did a game developer’s clever design decisions simplify the design? How does a game make something fun without being complex? Why might the developer have made decisions X or Y? What problems did that decision avoid?

These last questions are tough and often have subtle answers. They take thought and intention. Often while designing a game, my mind goes towards complexity. Making things bigger and more detailed! Can’t solve problem A? Well, let’s bolt on solution B!

For example, I’ve wanted to make a game where the player can create or build the world. Why not let the player shape the landscape? Add mountains and rivers? Place buildings? Harvest resources? It would be so cool! Right? But it’s a huge time sink. Even worse, it’s complex and could easily be a huge source of bugs.

So a clever solution? (I like how I’m calling myself clever.) Hex tiles. Yes. Hex tiles. Let the player build the world, but do it on a grid with prefabs. Bam! Same result. Same mechanic. Much simpler solution. It trades a pile of complex code for time spent in Blender designing tiles. Both Zen World and Dorfromantik are great examples of letting the player create the world without undue complexity.

Navigation can be another tough nut to crack. Issues and bugs pop up all over the place. Units running into each other. Different movement costs. Obstacles. How about navigation in a procedural landscape? Not to mention performance can be an issue with a large number of units.

My “Research” List

Creeper World 4 gets around this in such a simple and elegant way. Have all the units fly in straight lines. Hover. Move. Land. Done.

I am a big believer that constraints can foster creativity. For me, identifying what I can’t do is more important than identifying what I can do.

When I was building Fracture the Flag I wanted the players to be able to claim territory. At first, I wanted to break the map up into regions - something like the Risk map. I struggled with it for a while. One dead end after another. I couldn’t figure out a good solution.

Then I asked, why define the regions? Instead, let the players place flags around the map to claim territory! If a flag gets knocked down the player loses that territory. Want to know if a player can build at position X or Y? They can if it’s close to a flag. So many problems solved. So much simpler and frankly so much more fun.

With research comes a flood of ideas, and it’s crucial to write them down. Grab a notebook. Open a Google doc. Or, as I recently discovered, Google Keep - it’s super lightweight and easy to access on mobile for those ah-ha moments.

I keep track of big picture game ideas as well as smaller mechanics that I find interesting. I don’t limit myself to one idea or things that might nicely fit together. This is the throwing spaghetti at the wall stage of design. I’m throwing it out there and seeing what sticks. Even if, maybe especially if, I get excited about one idea I force myself to think beyond it and come up with multiple concepts and ideas. This is not the time to hyper focus.

At this stage, I also have to bring in a dose of reality. I’m not making an MMO or the next e-sports title. I’m dreaming big, but also trying not to waste my time on completely unrealistic dreams. I should probably know how to make at least 70, 80, or maybe 90 percent of the game!

While you’re playing games as “research” support small developers and leave them reviews! Use those reviews to process what you like and what you don’t like. What would you change? What would you keep? What feels good? What would feel better? Those reviews are so crucial to a developer. Yes, even negative ones are helpful.

Prototype Systems - Not The Game

At this point in the process, I get to start scratching the itch to build. Up until now, Unity hasn’t been opened. I’ve had to fight the urge, but it’s been for the best. Until now.

Now I get to prototype systems. Not a game or the game. Just parts of a potential game. This is when I start to explore systems that I haven’t made before or systems I don’t know how to make. I focus on parts that seem tricky or will be core to the game. I want to figure out the viability of an idea or concept.

At this stage, I dive into different research. Not playing games, but watching and reading tutorials and articles. I take notes. Lots of notes. For me, this is like going back to school. I need to learn how other people have created systems or mechanics. Why re-invent the wheel? Sometimes you need to roll your own solution, but why not at least see how other folks have done it first?

If I find a tutorial that feels too complex. I look for another. If that still feels wrong, I start to question the mechanic itself.

Maybe it’s beyond my skill set? Maybe it’s too complex for a guy doing this in his spare time? Or maybe I just need to slow down and read more carefully?

Some prototype Art for a possible Hex tile Game

Understanding and implementing a hex tile system was very much all of the above. Red Blob Games has an excellent guide to hex grids with all the math and examples of code to implement hex grids into your games. It’s not easy. Not even close. But it was fun to learn and with a healthy dose of effort, it’s understandable. (To help cement my understanding, I may do a series of videos on hex grids.)

This stage is also a chance to evaluate systems to see if they could be the basis of a game. I’ve been intrigued by ecosystems and evolution for a long while. Equilinox is a great example of a fairly recent ecosystem-based game made by a single (skilled) individual. Sebastian Lague put together an interesting video on evolution, which was inspired by the Primer videos. All of these made me want to explore the underlying mechanics.

So, I spent a day or two writing code, testing mechanics, and had some fun but ultimately decided it was too fiddly and too hard to base a game on. So I moved on, but it wasn’t a waste of time!

After each prototype is functional, but not polished, I ask myself more questions.

Does the system work? Is the system janky? What parts are missing or still need to be created? Is it too complex or hard to balance? Is there too much content to create? Or maybe it’s just crap?

For me, it’s also important that I’m not trying to integrate different system prototypes (at this point). Not yet. I for sure want to avoid coupling and keep things encapsulated, but I also don’t want to go down a giant rabbit hole. That time may come, but it’s not now. I’m also not trying to polish the prototypes. I want the systems to work and be reasonably robust, but at this point, I don’t even know if the systems will be in a game so I don’t want to waste time.

(Pre-Planning) Let The Culling Begin!

With prototypes of systems built, it’s now time to start chopping out the fluff, the junk, and start to give some shape to a game design. And yes, I start asking more questions.

What are the major systems of the game? What systems are easy or hard to make? Are there still systems I don’t know how to make? What do I still need to learn? What will be the singular core mechanic of the game?

And here’s a crucial question!

What are the time sinks? Even if I know how to do X or Y will it take too long?

3D Models, UI, art, animations, quests, stories, multiplayer, AI…. Basically, everything is a time sink. But!

Which ones play to my strengths? Which ones help me reach my goal? Which ones can I design around or ignore completely? What time sinks can be tossed out and still have a fun game?

Assets I Use

When I start asking these questions it’s easy to fall into the trap of using 3rd party assets to solve my design problems or fill in my lack of knowledge. It’s easy to use too many or use the wrong ones. I need to be very picky about what I use. Doubly so with assets that are used at runtime (as opposed to editor tools). For me, assets need to work out of the box AND work independently. If my 3rd party inventory system needs to talk to my 3rd party quest system which needs to talk to my 3rd party dialogue system I am asking for trouble and I will likely find it.

The asset store is full of shiny objects and rat holes. It’s worth a lot of time to think about what you really need from the asset store.

What can you create on your own? What should you NOT create on your own? What can you design around? Do you really need X or Y?

For me, simple is almost always better. If I do use 3rd party assets, and I do, they need to be part of the prototyping stage. I read the documentation and try to answer as many questions as I can before integrating the asset into my project. If the asset can’t do what I need, then I may have to make hard decisions about the asset, my design, or even the game as a whole.

I constantly have to remind myself that games aren’t fun because they’re complex. Or at the very least, complexity does not equal fun. What makes games fun is something far more subtle. Complexity is a rat hole. A shiny object.

Deep Breath. Pause. Think.

At this point, I have a rough sketch in my head of the game and it’s easy to get excited and jump into building with both feet. But! I need to stop. Breathe. And think.

Does the game match my goals? Can I actually make the game? Are there mechanics that should be thrown out? Can I simplify the game and still reach my goal? Is this idea truly viable?

Depending on the answers, I might need to go back and prototype, do more research, or scrap the entire design and start with something a single guy can actually make.

This point is a tipping point. I can slow down and potentially re-design the game or spend the next 6 months discovering my mistakes. Or worse, ignoring my mistakes and wasting even more time as I stick my head in the sand and insist I can build the game. I’ve been there. I’ve done that. And it wasn’t fun.

Now We Plan

Maybe a third of the items on my to do list for Grub Gauntlet

Ha! I bet you thought I was done planning. Not even close. I haven’t even really started.

There are a lot of opinions about the best planning tool. For me, I like Notion. Others like Milanote or just a simple Google doc. The tool doesn’t matter, it’s the process. So pick what works for you and don’t spend too much time trying to find the “best” tool. There’s a poop ton of work to do, don’t waste time.

Finding the right level of detail in planning is tough and definitely not a waste of time. I’m not creating some 100+ page Game Design Document. Rather, I think of what I'm creating as a to-do list. Big tasks. Small tasks. Medium tasks. I want to plan out all the major systems, all the art, and all the content. This is my chance to think through the game as a whole before sinking hundreds or, more likely, thousands of hours into the project.

To some extent, the resulting document forms a contract with myself and helps prevent feature creep. The plan also helps when I’m tired or don’t know what to do next. I can pull up my list and tackle something small or something interesting.

Somewhere in the planning process, I need to decide on a theme or skin for the game. The naming of classes or objects may depend on the theme AND more importantly, some of the mechanics may be easier or harder to implement depending on the theme. For example, Creeper World 4’s flying tanks totally work in the sci-fi-themed world. Not so much if they were flying catapults or swordsmen in a fantasy world. Need to resupply units? Creeper World sends the resources over power lines. Again, way easier than an animated 3D model of a worker using a navigation system to run from point A to point B and back again.

Does the theme match the mechanics? Does it match my skillset? Can I make that style of art? Does the theme help reach the goal? Does the theme simplify mechanics or make them more complex?

Minimum Viable Product (MVP)

Upgrade that knowledge

Finally! Now I get to start building the project structure, writing code, and bringing in some art. But! I’m still not building the game. I’m still testing. I want to get something playable as fast as possible. I need to answer the questions:

Is the game fun? Have I over-scoped the game? Can I actually build it with my current skills and available time?

If I spend 3 months working on an inventory system and all I can do is collect bits on a terrain and sell them to a store, I’ve over-scoped the game. If the game is tedious and not fun then I either need to scrap the game or dig deeper into the design and try to fix it. If the game breaks every time I add something or change a system then I need to rethink the architecture, or maybe the scope of the game, or upgrade my programming knowledge and skill set.

If I can create the MVP in less than a month and it’s fun then I’m on to something good!

Why so short a time frame? My last project, Grub Gauntlet, was created during a 48-hour game jam. I spent roughly 20 hours during that time to essentially create an MVP. It then took another 10 months to release! I figure the MVP is somewhere around 1/10th or 1/20th of the total build time.

It’s way better to lose 1-2 months building, testing, and then decide to scrap the project than to spend 1-2 years building a pile of crap. Or worse! Spend years working only to give up without a finished product.

Can I Build It Now?

This is the part we’re all excited about. Now I get to build, polish, and finish a game. There’s no secret sauce. This part is the hardest. It’s the longest. It’s the most discouraging. It’s also the most rewarding.

If I’ve done my work ahead of time then I should be able to finish my project. And that? That is an amazing feeling!

Strategy Game Camera: Unity's New Input System

I was working on a prototype for a potential new project and I needed a camera controller. I was also using Unity’s “new” input system. And I thought, hey, that could be a good tutorial…

There’s also a written post on the New Input System. Check the navigation to the right.

The goal here is to build a camera controller that could be used in a wide variety of strategy games. And to do it using Unity’s “New” Input System.

The camera controller will include:

  • Horizontal motion

  • Rotation

  • Zoom/elevate mechanic

  • Dragging the world with the mouse

  • Moving when the mouse is near the screen edge

Since I’ll be using the New Input System, you’ll want to be familiar with that before diving too deep into this camera controller. Check either the video or the written blog post.

If you’re just here for the code or want to copy and paste, you can get the code along with the Input Action Asset on GitHub.

Build the Rig

Camera rig Hierarchy

The first step to getting the camera working is to build the camera rig. For my purposes, I choose to keep it simple with an empty base object that will translate and rotate in the horizontal plane plus a child camera object that will move vertically while also zooming in and out.

I’d also recommend adding in something like a sphere or cube (remove its collider) at the same position as the empty base object. This gives us an idea of what the camera can see and how and where to position the camera object. It’s just easy debugging and once you’re happy with the camera you can delete the extra object.

Camera object transform settings

For my setup, my base object is positioned on the origin with no rotation or scaling. I’ve placed the camera object at (0, 8.3, -8.8) with no rotation (we’ll have the camera “look at” the target in the code).

For your project, you’ll want to play with the location to help tune the feel of your camera.

Input Settings

Input Action Asset for the Camera Controller

For the camera controller, I used a mix of events and directly polling inputs. Sometimes one is easier to use than the other. For many of these inputs, I defined them in an Input Action Asset. For some mouse events, I simply polled the buttons directly. If that doesn’t make sense yet, hopefully it will as we work through the code.

In the Input Action Asset, I created an action map for the camera and three actions - movement, rotate, and elevate. For the movement action I created two bindings to allow both the WASD keys and the arrow keys to be used. It’s easy, so why not? Also important, both rotate and elevate have their control type set to Vector2.

Importantly, the rotate action is using the delta of the mouse position, not the actual position. This allows for smooth movement and avoids the camera snapping around in a weird way.

We’ll be making use of the C# events. So make sure to save or have auto-save enabled. We also need to generate the C# code. To do this select the Input Action Asset in your project folders and then in the inspector click the “generate C# class” toggle and press apply.

Variables and More Variables!

Next, we need to create a camera controller script and attach it to the base object of our camera rig. Then inside of a camera controller class we need to create our variables. And there’s a poop ton of them.

The first two variables will be used to cache references for use with the input system.

The camera transform variable will cache a reference to the transform of the camera object - as opposed to the empty base object that this class will be attached to.

All of the variables with the BoxGroup attribute will be used to tune the motion of the camera. Rather than going through them one by one… I’m hoping the name of the group and the name of the variable clarifies their approximate purpose.

The camera settings I’m using

The last four variables are all used to track various values between functions. Meaning one function might change a value and a second function will make use of that value. None of these need to have their value set outside of the class.

A couple of other bits: Notice that I’ve also added the UnityEngine.InputSystem namespace. Also, I’m using Odin Inspector to make my inspector a bit prettier and keep it organized. If you don’t have Odin, you should, but you can just delete or ignore the BoxGroup attributes.
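Pulling all of that together, here’s a rough sketch of what the fields might look like. The names are my best guesses based on the descriptions above (yours may differ), and I’ve swapped the BoxGroup attributes for plain SerializeField:

    using UnityEngine;
    using UnityEngine.InputSystem;

    public class CameraController : MonoBehaviour
    {
        // Cached references for the input system
        private CameraControls cameraActions;    // the generated C# class
        private InputAction movement;

        // The child camera object - as opposed to the base object this class sits on
        private Transform cameraTransform;

        // Horizontal motion
        [SerializeField] private float maxSpeed = 5f;
        [SerializeField] private float acceleration = 10f;
        [SerializeField] private float damping = 15f;

        // Vertical motion
        [SerializeField] private float stepSize = 2f;
        [SerializeField] private float zoomDampening = 7.5f;
        [SerializeField] private float minHeight = 5f;
        [SerializeField] private float maxHeight = 50f;
        [SerializeField] private float zoomSpeed = 2f;

        // Rotation
        [SerializeField] private float maxRotationSpeed = 1f;

        // Screen edge motion
        [SerializeField] [Range(0f, 0.1f)] private float edgeTolerance = 0.05f;

        // Values tracked between functions
        private float speed;
        private float zoomHeight;
        private Vector3 horizontalVelocity;
        private Vector3 lastPosition;
        private Vector3 targetPosition;
        private Vector3 startDrag;
    }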

Horizontal Motion

I’m going to try and build the controller in chunks with each chunk adding a new mechanic or piece of functionality. This also (roughly) means you can add or not add any of the chunks and the camera controller won’t break.

The first chunk is horizontal motion. It’s also the piece that takes the most setup… So bear with me.

First, we need to set up our Awake, OnEnable, and OnDisable functions.

In the Awake function, we need to create an instance of our CameraControls input action asset. While we’re at it we can also grab a reference to the transform of our camera object.

In the OnEnable function, we first need to make sure our camera is looking in the correct direction - we can do this with the LookAt function directed towards the camera rig base object (the same object the code is attached to).

Then we can save the current position to our last position variable - this value will get used to help create smooth motion.

Next, we’ll cache a reference to our MoveCamera action - we’ll be directly polling the values for movement. We also need to call Enable on the Camera action map.

In OnDisable we’ll call Disable on the camera action map to avoid issues and errors in case this object or component gets turned off.
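As a sketch, those three functions might look something like this (CameraControls is the generated class, and the Camera map and MoveCamera action names come from the asset described above):

    private void Awake()
    {
        // Create an instance of the generated input class and grab the child camera
        cameraActions = new CameraControls();
        cameraTransform = GetComponentInChildren<Camera>().transform;
    }

    private void OnEnable()
    {
        // Start out looking at the rig base (the object this script is attached to)
        cameraTransform.LookAt(transform);
        lastPosition = transform.position;

        // Cache the movement action for polling and enable the Camera action map
        movement = cameraActions.Camera.MoveCamera;
        cameraActions.Camera.Enable();
    }

    private void OnDisable()
    {
        cameraActions.Camera.Disable();
    }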

Helper functions to get camera relative directions

Next, we need to create two helper functions that return camera-relative directions. In particular, we’ll be getting the forward and right directions. These are all we’ll need since the camera rig base will only move in the horizontal plane. For the same reason, we’ll also squash the y value of these vectors to zero.
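A minimal version of those helpers:

    // Camera-relative forward, flattened onto the horizontal plane
    private Vector3 GetCameraForward()
    {
        Vector3 forward = cameraTransform.forward;
        forward.y = 0f;
        return forward;
    }

    // Camera-relative right, flattened onto the horizontal plane
    private Vector3 GetCameraRight()
    {
        Vector3 right = cameraTransform.right;
        right.y = 0f;
        return right;
    }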

Kind of yucky. But gets the job done.

Admittedly I don’t love the next function. It feels a bit clumsy, but since I’m not using a rigidbody and I want the camera to smoothly speed up and slow down I need a way to calculate and track the velocity (in the horizontal plane). So thus the Update Velocity function.

Nothing too special in the function other than once again squashing the y dimension of the velocity to zero. After calculating the velocity we update the value of the last position for the next frame. This ensures we are calculating the velocity for the frame and not from the start.
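Roughly:

    private void UpdateVelocity()
    {
        // Velocity = displacement / time, ignoring any vertical motion
        horizontalVelocity = (transform.position - lastPosition) / Time.deltaTime;
        horizontalVelocity.y = 0f;

        // Store this frame's position for next frame's calculation
        lastPosition = transform.position;
    }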

The next function is the poorly named Get Keyboard Movement function. This function polls the Camera Movement action to then set the target position.

In order to translate the input into the motion we want, we need to be a bit careful. We’ll take the x component of the input and multiply it by the Camera Right function and add that to the y component of the input multiplied by the Camera Forward function. This ensures that the movement is in the horizontal plane and relative to the camera.

We then normalize the resulting vector to keep a uniform length so that the speed will be constant even if multiple keys are pressed (up and right for example).

The last step is to check if the input value’s square magnitude is above a threshold, if it is we add our input value to our target position.

Note that we are NOT moving the object here since eventually there will be multiple ways to move the camera base, we are instead adding the input to a target position vector and our NEXT function will use this target position to actually move the camera base.
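Something like this (the 0.1f threshold is my choice - tune it to taste):

    private void GetKeyboardMovement()
    {
        Vector3 inputValue = movement.ReadValue<Vector2>().x * GetCameraRight()
                           + movement.ReadValue<Vector2>().y * GetCameraForward();

        inputValue = inputValue.normalized;

        // Only register meaningful input - this also filters out joystick drift
        if (inputValue.sqrMagnitude > 0.1f)
            targetPosition += inputValue;
    }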

If we were okay with herky-jerky movement the next function would be much simpler. If we were using the physics engine (rigidbody) to move the camera it would also be simpler. But I want smooth motion AND I don’t want to tune a rigidbody. So to create smooth ramping up and down of speed we need to do some work. This work will all happen in the Update Base Position function.

First, we’ll check if the square magnitude of the target position is greater than a threshold value. If it is this means the player is trying to get the camera to move. If that’s the case we’ll lerp our current speed up to the max speed. Note that we’re also multiplying Time Delta Time by our acceleration. The acceleration allows us to tune how quickly our camera gets up to speed.

The use of the threshold value is for two reasons. One, so we aren’t comparing a float to zero - asking if a float exactly equals zero can be problematic. Two, if we were using a game controller joystick, the input value may not be zero even when the stick is at rest.

Testing the Code so far - Smooth Horizontal Motion

We then add to the transform’s position an amount equal to the target position multiplied by the current camera speed and time delta time.

While they might look different these two lines of code are closely related to the Kinematic equations you may have learned in high school physics.

If the player is not trying to get the camera to move we want the camera to smoothly come to a stop. To do this we want to lerp our horizontal velocity (calculated constantly by the previous function) down to zero. Note rather than using our acceleration to control the rate of the slow down, I’ve used a different variable (damping) to allow separate control.

With the horizontal velocity lerping its way towards zero, we then add to the transform’s position a value equal to the horizontal velocity multiplied by time delta time.

The final step is to set the target position to zero to reset for the next frame’s input.
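Put together, Update Base Position might look like this:

    private void UpdateBasePosition()
    {
        if (targetPosition.sqrMagnitude > 0.1f)
        {
            // Player input detected - ramp the speed up towards max speed
            speed = Mathf.Lerp(speed, maxSpeed, Time.deltaTime * acceleration);
            transform.position += targetPosition * speed * Time.deltaTime;
        }
        else
        {
            // No input - ease the camera to a stop using the tracked velocity
            horizontalVelocity = Vector3.Lerp(horizontalVelocity, Vector3.zero, Time.deltaTime * damping);
            transform.position += horizontalVelocity * Time.deltaTime;
        }

        // Reset for the next frame's input
        targetPosition = Vector3.zero;
    }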

Our last step before we can test our code is to add our last three functions into the update function.
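Which is as simple as:

    private void Update()
    {
        GetKeyboardMovement();
        UpdateVelocity();
        UpdateBasePosition();
    }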

Camera Rotation

Okay. The hardest parts are over. Now we can add functionality reasonably quickly!

So let’s add the ability to rotate the camera. The rotation will be based on the delta or change in the mouse position and will only occur when the middle mouse button is pressed.

We’ll be using an event to trigger our rotation, so our first addition to our code is in our OnEnable and OnDisable functions. Here we’ll subscribe and unsubscribe the (soon to be created) Rotate Camera function to the performed event for the rotate camera action.

If you’re new to the input system, you’ll notice that the Rotate Camera function takes in a Callback Context object. This contains all the information about the action.

Rotating the camera should now be a thing!

Inside the function, we’ll first check if the middle mouse button is pressed. This ensures that the rotation doesn’t occur constantly but only when the button is pressed. For readability more than functionality, we’ll store the x value of the mouse delta and use it in the next line of code.

The last piece is to set the rotation of the transform (base object) and only on the y-axis. This is done using the x value of the mouse delta multiplied by the max rotation speed all added to the current y rotation.
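A sketch of the whole rotation piece, with the subscriptions shown as comments (RotateCamera is assumed to be the action name from the asset above):

    // In OnEnable:  cameraActions.Camera.RotateCamera.performed += RotateCamera;
    // In OnDisable: cameraActions.Camera.RotateCamera.performed -= RotateCamera;

    private void RotateCamera(InputAction.CallbackContext inputValue)
    {
        // Only rotate while the middle mouse button is held
        if (!Mouse.current.middleButton.isPressed)
            return;

        float value = inputValue.ReadValue<Vector2>().x;
        transform.rotation = Quaternion.Euler(
            0f,
            value * maxRotationSpeed + transform.rotation.eulerAngles.y,
            0f);
    }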

And that’s it. With the event getting invoked there’s no need to add the function to our update function. Nice and easy.

Vertical Camera Motion

With horizontal and rotational motion working it would be nice to move the camera up and down to let the player see more or less of the world. For controlling the “zooming” we’ll be using the mouse scroll wheel.

This motion I found to be one of the more complicated because there were several bits I wanted to include. I wanted a min and max height for the camera - this keeps the player from zooming too far out or zooming down to nothingness. Also, while going up and down, it feels a bit more natural if the camera gets closer to or farther away from what it’s looking at.

This zoom motion is another good use of events, so we need to make a couple of additions to OnEnable and OnDisable. Just like we did with the rotation, we need to subscribe and unsubscribe to the performed event for the zoom camera action. We also need to set the value of zoom height equal to the local y position of the camera - this gives an initial value and prevents the camera from doing wacky things.

Then inside the Zoom Camera function, we’ll cache a reference to the y component of the scroll wheel input and divide by 100 - this scales the value to something more useful (in my opinion).

If the absolute value of the input value is greater than a threshold, meaning the player has moved the scroll wheel, we’ll set the zoom height to the local y position plus the input value multiplied by the step size. We then compare the predicted height to the min and max height. If the target height is outside of the allowed limits we set our height to the min or max height respectively.
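Sketched out (ZoomCamera is assumed to be the action name; flip the sign on the input if the zoom feels backwards to you):

    // In OnEnable:  cameraActions.Camera.ZoomCamera.performed += ZoomCamera;
    //               zoomHeight = cameraTransform.localPosition.y;
    // In OnDisable: cameraActions.Camera.ZoomCamera.performed -= ZoomCamera;

    private void ZoomCamera(InputAction.CallbackContext inputValue)
    {
        // Scroll input arrives in big steps - scale it down to something useful
        float value = -inputValue.ReadValue<Vector2>().y / 100f;

        if (Mathf.Abs(value) > 0.1f)
        {
            zoomHeight = cameraTransform.localPosition.y + value * stepSize;

            // Clamp to the allowed height range
            if (zoomHeight < minHeight)
                zoomHeight = minHeight;
            else if (zoomHeight > maxHeight)
                zoomHeight = maxHeight;
        }
    }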

Once again, this function isn’t doing the actual moving - it’s just setting a target of sorts. The Update Camera Position function will do the actual moving of the camera.

The first step to move the camera is to use the value of the zoom height variable to create a Vector3 target for the camera to move towards.

Zooming in action

The next line is admittedly a bit confusing and is my attempt to create a zoom forward/backward motion while going up and down. Here we subtract a vector from our target location. The subtracted vector is a product of our zoom speed and the difference between the current height and the target height, all of which is multiplied by the vector (0, 0, 1). This creates a vector proportional to how much we are moving vertically, but in the camera’s local forward/backward direction.

Our last steps are to lerp the camera’s position from its current position to the target location. We use our zoom damping variable to control the speed of the lerp.

Finally, we also have the camera look at the base to ensure we are still looking in the correct direction.
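All of which might look like:

    private void UpdateCameraPosition()
    {
        // Move towards the height set by ZoomCamera
        Vector3 zoomTarget = new Vector3(
            cameraTransform.localPosition.x,
            zoomHeight,
            cameraTransform.localPosition.z);

        // Pull the camera forward/backward in proportion to the vertical move
        zoomTarget -= zoomSpeed * (zoomHeight - cameraTransform.localPosition.y) * Vector3.forward;

        cameraTransform.localPosition = Vector3.Lerp(
            cameraTransform.localPosition, zoomTarget, Time.deltaTime * zoomDampening);
        cameraTransform.LookAt(transform);
    }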

Before our zoom will work we need to add both functions to our update function.

If you are having weird zooming behavior it’s worth double-checking the initial position of the camera object. My values are shown at the top of the page. In my testing if the x position is not zero, some odd twisting motion occurs.

Mouse at Screen Edges

At this point, we have a pretty functional camera, but there’s still a bit more polish we can add. Many games allow the player to move the camera when the mouse is near the edges of the screen. Personally, I like this when playing games, but I do find it frustrating when working in Unity as the “screen edges” are defined by the game view…

To create this motion with the mouse all we need to do is check if the mouse is near the edge of the screen.

We do this by using Mouse.current.position.ReadValue(). This is very similar to the “old” input system where we could just call Input.mousePosition.

We also need a vector to track the motion that should occur - this allows the mouse to be in the corner and have the camera move in a diagonal direction.

Screen edge motion

Next, we simply check if the mouse x and y positions are less than or greater than threshold values. The edge tolerance variable allows fine-tuning of how close to the edge the cursor needs to be - in my case I’m using 0.05.

The mouse position is given to us in pixels, not in normalized screen-space coordinates, so it’s important that we multiply the edge tolerance by the screen width and height respectively. Notice that we are again making use of the GetCameraRight and GetCameraForward functions.

The last step inside the function is to add our move direction vector to the target position.
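Here’s a sketch of the whole function:

    private void CheckMouseAtScreenEdge()
    {
        Vector2 mousePosition = Mouse.current.position.ReadValue();
        Vector3 moveDirection = Vector3.zero;

        // Horizontal edges
        if (mousePosition.x < edgeTolerance * Screen.width)
            moveDirection += -GetCameraRight();
        else if (mousePosition.x > (1f - edgeTolerance) * Screen.width)
            moveDirection += GetCameraRight();

        // Vertical edges
        if (mousePosition.y < edgeTolerance * Screen.height)
            moveDirection += -GetCameraForward();
        else if (mousePosition.y > (1f - edgeTolerance) * Screen.height)
            moveDirection += GetCameraForward();

        targetPosition += moveDirection;
    }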

Since we are not using events this function also needs to get added to our update function.

Dragging the World

I stole and adapted the drag functionality from Game Dev Guide.

The last piece of polish I’m adding is the ability to click and drag the world. This makes for very fast motion and generally feels good. However, a note of caution when implementing this: since we are using a mouse button to drag, this can quickly interfere with other player actions such as placing units or buildings. For this reason, I’ve chosen to use the right mouse button for dragging. If you want to use the left mouse button you’ll need to check if you CAN or SHOULD drag - i.e. are you placing an object or doing something else with your left mouse button? In the past I have used a drag handler… so maybe that’s a better route, but it’s not the direction I chose to go at this point.

I should also admit that I stole and adapted much of the dragging code from a Game Dev Guide video which used the old input system.

Since dragging is an every frame type of thing, I’m once again going to directly poll to determine whether the right mouse button is down and to get the current position of the mouse…

This could probably be done with events, but that seems contrived and I’m not sure I really see the benefit. Maybe I’m wrong.

Inside the Drag Camera function, we can first check if the right button is pressed. If it’s not we don’t want to go any further.

If the button is pressed, we’re going to create a plane (I learned about this in the Game Dev Guide video) and a ray from the camera to the mouse cursor. The plane is aligned with the world XZ plane and is facing upward. When creating the plane the first parameter defines the normal and the second defines a point on the plane - which for the non-math nerds is all you need.

Next, we’ll raycast to the plane. So cool. I totally didn’t know this was a thing!

The out variable of distance tells us how far the ray went before it hit the plane, assuming it hit the plane. If it did hit the plane we’re going to do two different things - depending on whether we just started dragging or if we are continuing to drag.

Dragging the world

If the right mouse button was pressed this frame (learned about this thanks to a YouTube comment) we’ll cache the point on the plane that we hit. And we get that point, by using the Get Point function on our ray.

If the right mouse button wasn’t pressed this frame, meaning we are actively dragging, we can update the target position variable with the vector from where dragging started to where it currently is.
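The whole thing, roughly (this assumes the camera is tagged MainCamera so Camera.main can find it):

    private void DragCamera()
    {
        if (!Mouse.current.rightButton.isPressed)
            return;

        // A plane through the origin facing up, i.e. the world XZ plane
        Plane plane = new Plane(Vector3.up, Vector3.zero);
        Ray ray = Camera.main.ScreenPointToRay(Mouse.current.position.ReadValue());

        if (plane.Raycast(ray, out float distance))
        {
            if (Mouse.current.rightButton.wasPressedThisFrame)
                startDrag = ray.GetPoint(distance);    // drag just started - cache the point
            else
                targetPosition += startDrag - ray.GetPoint(distance);
        }
    }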

The final step is to add the drag function to our update function.

That’s It!

There you go. The basics of a strategy camera for Unity using the New Input System. Hopefully, this gives you a jumping off point to refine and maybe add features to your own camera controller.

Raycasting - It's mighty useful

Converting the examples to use the new input system. Please check the pinned comment on YouTube for some error correction.

What is Raycasting?

Raycasting is a lightweight and performant way to reach out into a scene and see what objects are in a given direction. You can think of it as something like a long stick used to poke and prod around a scene. When something is found, we can get all kinds of info about that object and have access to all of its components.

So… It’s pretty useful and a tool you should have in your game development toolbox.

Three Important Bits

The examples here are all going to be 3D. If you are working on a 2D project the ideas and concepts are nearly identical - the biggest difference is that the code uses the 2D versions of the functions (Physics2D.Raycast instead of Physics.Raycast).

It’s also worth noting that the code for all the raycasting in the following examples, except for the jumping example, can be put on any object in the scene, whether that is the player or maybe some form of manager.

The final and really important tidbit is that raycasting is part of the physics engine. This means that for raycasting to hit or find an object, that object needs to have a collider or a trigger on it. I can’t tell you how many hours I’ve spent trying to debug raycasting only to find I forgot to put a collider on an object.

But First! The Basics.

The basic Raycast function

We need to look at the Raycast function itself. The function has a ton of overloads which can be pretty confusing when you’re first getting started.

That said, using the function basically breaks down into 5 pieces of information - the first two of which are required in all versions of the function. Those pieces of information are:

  1. A start position.

  2. The direction to send the ray.

  3. A RaycastHit, which contains all the information about the object that was hit.

  4. How far to send the ray.

  5. Which layers can be hit by the raycast.

It’s a lot, but not too bad.
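To make that concrete, here’s a minimal sketch using the most complete overload (the layer mask field is mine):

    using UnityEngine;

    public class RaycastBasics : MonoBehaviour
    {
        [SerializeField] private LayerMask layerMask;

        private void Update()
        {
            // 1: start position, 2: direction, 3: hit info, 4: max distance, 5: layer mask
            if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit, 10f, layerMask))
            {
                Debug.Log($"Hit {hit.collider.name} at {hit.point}");
            }
        }
    }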

Defining a ray with a start position and a direction (both Vector3)

Raycast using a Ray

Unity does allow us to simplify the input parameters, just a bit, with the use of a ray. A ray essentially stores the start position and the direction in one container, allowing us to reduce the number of input parameters for the raycast function by one.

Notice that we are defining the RaycastHit inline with the use of the keyword out. This effectively creates a local variable with fewer lines of code.
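The Ray version of the same cast might look like this:

    // Pack the start position and direction into a Ray
    Ray ray = new Ray(transform.position, transform.forward);

    if (Physics.Raycast(ray, out RaycastHit hit, 10f, layerMask))
    {
        Debug.Log($"Hit {hit.collider.name} at {hit.point}");
    }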


Ok Now Onto Shooting

Creating a ray from the camera through the center of the screen

To apply this to first-person shooting, we need a ray that starts at the camera and goes in the camera’s forward direction.

Then since the raycast function returns a boolean, true if it hits something, false if it didn’t, we can wrap the raycast in an if statement.

In this case, we could forgo the distance, but I’ll set it to something reasonable. I will, however, skip the layer mask as I want to be able to shoot at everything in the scene so the layer mask isn’t needed.

When I do hit something I want some player feedback so I’ll instantiate a prefab at the hit point. In my case, the prefab has a particle system, a light, and an audio source just to make shooting a bit more fun.

Okay, but what if we want to do something different when we hit a particular type of target?

There are several ways to do this, the way I chose was to add a script to the target (purple sphere) that has a public “GetShot” function. This function takes in the direction from the ray and then applies a force in that direction plus a little upward force to add some extra juice.

Complete first person shooting example

The unparenting at the end of the GetShot function is to avoid any scaling issues as the spheres are parented to the cubes below them.

Then back to the raycast, we can check if the object we hit has a “Target” component on it. If it does, we call the “GetShot” function and pass in the direction from the ray.

The function getting called could of course be on a player or NPC script and do damage or any other number of things needed for your game.

The RaycastHit gives us access to the object hit and thus all the components on that object so we can do just about anything we need.

But! We still need some way to trigger this raycast and we can do that by wrapping it all in another if statement that checks if the left mouse button was pressed. And all of that can go into our update function so we check every frame.
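A sketch of the full shooting setup, with the Target and GetShot names taken from the description above (the prefab, force values, and camera field are mine):

    using UnityEngine;

    public class Shooter : MonoBehaviour
    {
        [SerializeField] private GameObject hitEffectPrefab;    // particles, light, audio
        [SerializeField] private Camera playerCamera;

        private void Update()
        {
            // Old input system polling shown for brevity
            if (Input.GetMouseButtonDown(0))
                Shoot();
        }

        private void Shoot()
        {
            Ray ray = new Ray(playerCamera.transform.position, playerCamera.transform.forward);

            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                // Player feedback at the hit point
                Instantiate(hitEffectPrefab, hit.point, Quaternion.identity);

                // If we hit a target, tell it to react
                if (hit.transform.TryGetComponent(out Target target))
                    target.GetShot(ray.direction);
            }
        }
    }

    // In its own file in a real project
    public class Target : MonoBehaviour
    {
        [SerializeField] private float force = 10f;

        public void GetShot(Vector3 direction)
        {
            // Unparent first to avoid scaling issues with the parent cube
            transform.parent = null;
            GetComponent<Rigidbody>().AddForce(
                direction * force + Vector3.up * 2f, ForceMode.Impulse);
        }
    }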



Selecting Objects

Another common task in games is to click on objects with a mouse and have the object react in some way. As a simple example, we can click on an object to change its color and then have it go back to its original color when we let go of the mouse button.

To do this, we’ll need two extra variables to hold references to a mesh renderer as well as the color of the material on that mesh renderer.

For this example, I am going to use a layer mask. To make use of the layer mask, I’ve created a new layer called “selectable” and changed the layer of all the cubes and spheres in the scene, and left the rest of the objects on the default layer. This will prevent us from clicking on the background and changing its color.

Complete code for toggling an object's color

Then in the script, I created a private serialized field of the type layer mask. Flipping back into Unity the value of the layer mask can be set to “selectable.”

Then if and else if statements check for the left mouse button being pressed and released, respectively.

If the button is pressed we’ll need to raycast and in this case, we need to create a ray from the camera to the mouse position.

Thankfully Unity has given us a nice built-in function, ScreenPointToRay, that does this for us!

With our ray created we can add our raycast function, using the created ray, a RaycastHit, a reasonable distance, and our layer mask.

If we hit an object on our selectable layer, we can cache the mesh renderer and the color of the first material. The caching is so when we release the mouse button we can restore the color to the correct material on the correct mesh renderer.
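Assembled into one sketch (the field names are mine):

    using UnityEngine;

    public class ColorToggle : MonoBehaviour
    {
        [SerializeField] private LayerMask selectableLayer;

        private MeshRenderer cachedRenderer;
        private Color cachedColor;

        private void Update()
        {
            if (Input.GetMouseButtonDown(0))
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                Debug.DrawLine(ray.origin, ray.GetPoint(100f), Color.red, 2f);

                if (Physics.Raycast(ray, out RaycastHit hit, 100f, selectableLayer))
                {
                    // Cache the renderer and its color so we can restore them later
                    cachedRenderer = hit.transform.GetComponent<MeshRenderer>();
                    cachedColor = cachedRenderer.material.color;
                    cachedRenderer.material.color = Color.red;
                }
            }
            else if (Input.GetMouseButtonUp(0) && cachedRenderer != null)
            {
                cachedRenderer.material.color = cachedColor;
                cachedRenderer = null;
            }
        }
    }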

Not too bad.

Notice that I’ve also added the function Debug.DrawLine. When getting started with raycasting it is SUPER easy to get rays going in the wrong direction or maybe not going far enough.

The DrawLine function does just as it says, drawing a line from one point to another. There is also a duration parameter that sets how long the line is drawn, in seconds, which can be particularly helpful when the raycasting is only done for one frame at a time.






Moving Objects

Now at first glance moving objects seems very similar to selecting objects - raycast to the object and move the object to the hit point. I’ve done this a lot…

The problem is the object comes screaming towards the camera, because the hit point is closer to the camera than the object's center. Probably not what you or your players want to happen.

Don’t do this!!

One way around this is to use one raycast to select the object and a second raycast to move the object. Each raycast will use a different layer mask to avoid the flying cube problem.

I’ve added a “ground” layer to the project and assigned it to the plane in the scene. The “selectable” layer is assigned to all the cubes and spheres. The values for the layer masks can again be set in the inspector.

To make this all work, we’re also going to need variables to keep track of the selected object (Transform) and the last point hit by the raycast (Vector3).

To get our selected object, we’ll first check if the left mouse button has been clicked and if the selected object is currently null. If both are true, we’ll use a raycast just like the last example to store a reference to the transform of the object we clicked on.

Note the use of the “object” layer mask in the raycast function.

Our second raycast happens when the left mouse button is held down AND the selected object is NOT null. Just like the first raycast this one goes from the camera to the mouse, but it makes use of the second layer mask, which allows the ray to go through the selected object and hit the ground.

We now move the selected object to the point hit by the raycast, plus, just for fun, we move it up a bit as well. This lets us drag the object around.

If we left it like this and let go of the mouse button the object would stay levitated above the ground. So instead, when the mouse button comes up we can set the position to the last point hit by the raycast as well as setting the selectedObject variable to null - allowing us to select a new object.
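The two-raycast approach, sketched (the layer masks and field names are mine):

    using UnityEngine;

    public class ObjectMover : MonoBehaviour
    {
        [SerializeField] private LayerMask selectableLayer;
        [SerializeField] private LayerMask groundLayer;

        private Transform selectedObject;
        private Vector3 lastHitPoint;

        private void Update()
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            // First raycast: pick up an object
            if (Input.GetMouseButtonDown(0) && selectedObject == null)
            {
                if (Physics.Raycast(ray, out RaycastHit hit, 100f, selectableLayer))
                    selectedObject = hit.transform;
            }
            // Second raycast: drag it along the ground
            else if (Input.GetMouseButton(0) && selectedObject != null)
            {
                if (Physics.Raycast(ray, out RaycastHit hit, 100f, groundLayer))
                {
                    lastHitPoint = hit.point;
                    selectedObject.position = hit.point + Vector3.up;    // hover a bit, just for fun
                }
            }
            // Release: settle the object back onto the ground
            else if (Input.GetMouseButtonUp(0) && selectedObject != null)
            {
                selectedObject.position = lastHitPoint;
                selectedObject = null;
            }
        }
    }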


Jumping

The last example I want to go over in any depth is jumping, which can be easily extended to other platforming needs like detecting a wall, a slope, or the edge of a platform. I’d strongly suggest checking out Sebastian Lague’s series on creating a 2D platformer if you want to see raycasting put to serious use, not to mention a pretty good character controller for a 2D game!

For this example, I’ve created a variable to store the rigidbody and I’ve cached a reference to that rigidbody in the start function.

For basic jumping, generally, the player needs to be on the ground in order to jump. You could use a trigger combined with OnTriggerEnter and OnTriggerExit to track if the player is touching the ground, but that’s clumsy and has limitations.

Instead, we can do a simple short raycast directly down from the player object to check and see if we’re near the ground. Once again this makes use of a layer mask, and in this case only casts to the ground layer.

Full code for jumping

I’ve wrapped the raycast into a separate function that returns the boolean from the raycast. The ray itself goes from the center of the player character in the down direction. The raycast distance is set to 1.1 since the player object (a capsule) is 2 meters high and I want the raycast to extend just beyond the object. If the raycast extends too far, the ground can be detected when the player is off the ground and the player will be able to jump while in the air.

I’ve also added in a Debug.DrawLine function to be able to double-check that the ray is in the correct place and reaching outside the player object.

Then in the update function, we check if the spacebar is pressed along with whether the player is on the ground. If both are true we apply force to the rigidbody and the player jumps.
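All together, a sketch of the jumping code:

    using UnityEngine;

    public class Jumper : MonoBehaviour
    {
        [SerializeField] private LayerMask groundLayer;
        [SerializeField] private float jumpForce = 5f;

        private Rigidbody rb;

        private void Start()
        {
            rb = GetComponent<Rigidbody>();
        }

        private void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space) && IsGrounded())
                rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }

        private bool IsGrounded()
        {
            // The capsule is 2 m tall, so 1.1 m reaches just past its bottom
            Debug.DrawLine(transform.position, transform.position + Vector3.down * 1.1f);
            return Physics.Raycast(transform.position, Vector3.down, 1.1f, groundLayer);
        }
    }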




RaycastHit

The real star of the raycasting show is the RaycastHit variable.

It’s how we get a handle on the object the raycast found and there’s a decent amount of information that it can give us. In all the examples above we made use of “point” to get the exact coordinates of the hit. For me this is what I’m using 9 times out of 10 or even more when I raycast.

We can also get access to the normal of the surface we hit, which among other things could be useful if you want something to ricochet off a surface or if you want to have a placed object sit flat on a surface.

The RaycastHit can also return the distance from the ray’s origin to the hit point as well as the rigidbody that was hit (if there was one).

If you want to get really fancy you can also access bits about the geometry and the textures at the hit point.


Other Things Worth Knowing

So there’s 4 examples of common uses of raycasting, but there are a few other bits of info that could be good to know too.

There is an additional input for raycasting which is Physics.queriesHitTriggers. By default this parameter is true, and if it’s true raycasts will hit triggers. If it’s false the raycast will skip triggers. This could be helpful for raycasting to NPCs that have a collider on their body, but also have a larger trigger surrounding them to detect nearby objects.

Next useful bit. If you don’t set a distance for a raycast, Unity will default to an infinite distance - whatever infinity means to a computer… There could be several reasons not to allow the ray to go to infinity - the jump example is one of those.

A very imprecise way of measuring performance

Raycasting can get a bad rap for performance. The truth is it’s pretty lightweight.

I created a simple example that raycasts between 1 and 1000 times per frame. In an empty scene on my computer with 1 raycast I saw over 5000 fps. With 1000 raycasts per FRAME I saw 800 fps. More importantly, but no more precisely measured, the main thread only took a 1.0 ms hit when going from 1 raycast to 1000 raycasts, which isn’t insignificant, but it’s also not game-breaking. So if you are doing 10 or 20 raycasts, or even 100 raycasts per frame, it’s probably not something you need to worry about.

1 Raycast per Frame

1000 Raycasts per Frame

Also worth knowing about is the RaycastAll function, which will return all objects the ray intersects, not just the first object. Definitely useful in the right situation.

Lastly, there are other types of “casting” not just raycasting. There is line casting, box casting, and sphere casting. All of which use their respective geometric shape and check for colliders and triggers in their path. Again useful in the right situation - but beyond the scope of this tutorial.

Cinemachine. If you’re not. You should.

So full disclosure! This isn’t intended to be the easy one-off tutorial showing you how to make a particular thing. I want to get there, but this isn’t it. Instead, this is an intro. An overview.

If you’re looking for “How do I make an MMO RPG RTS 2nd Person Camera” this isn’t the tutorial for you. But! I learned a ton while researching Cinemachine (i.e. reading the documentation and experimenting) and I figured if I learned a ton then it might be worth sharing. Maybe I’m right. Maybe I’m not.

Cinemachine. What is it? What does it do?

Cinemachine setup in a Unity scene

Cinemachine is a Unity asset that quickly and easily creates high-functioning camera controllers without the need (but with the option) to write custom code. In just a matter of minutes, you can add Cinemachine to your project, drop in the needed prefabs and components and you’ll have a functioning 2D or 3D camera!

It really is that simple.

But!

If you’re like me you may have just fumbled your way through using Cinemachine and never really dug into what it can do, how it works, or the real capabilities of the asset. This leaves a lot of potential functionality undiscovered and unused.

Like I said above, this tutorial is going to be a bit different, many other tutorials cover the flashy bits or just a particular camera type, this post will attempt to be a brief overview of all the features that Cinemachine has to offer. Future posts will take a look at more specific use cases such as cameras for a 2D platformer, 3rd person games, or functionality useful for cutscenes and trailers.

If there’s a particular camera type, game type, or functionality you’d like to see leave a comment down below.

How do you get Cinemachine?

Cinemachine in the Package Manager

Cinemachine used to be a paid asset on the asset store and as I remember it, it was one of the first assets that Unity purchased and made free for all of its users! Nowadays it takes just a few clicks and a bit of patience with the Unity package manager to add Cinemachine to your project. Piece of cake.

The Setup

Once you’ve added Cinemachine to your project the next step is to add a Cinemachine Brain to your Unity Camera. The brain must be on the same object as the Unity camera component since it functions as the communication link between the Unity camera and any of the Cinemachine Virtual Cameras that are in the scene. The brain also controls the cut or blend from one virtual camera to another - pretty handy when creating a cut scene or recording footage for a trailer. Additionally, the brain is also able to fire events when the shot changes like when a virtual camera goes live - once again particularly useful for trailers and cutscenes.

Cinemachine Brain

Cinemachine does not add more camera components to your scene, but instead makes use of so-called “virtual cameras.” These virtual cameras control the position and rotation of the Unity camera - you can think of a virtual camera as a camera controller, not an actual camera component. There are several types of Cinemachine Virtual Cameras each with a different purpose and different use. It is also possible to program your own Virtual Camera or extend one of the existing virtual cameras. For most of us, the stock cameras should be just fine and do everything we need with just a bit of tweaking and fine-tuning.

Cinemachine offers several prefabs or presets for virtual camera objects - you can find them all in the Cinemachine menu. Or if you prefer you can always build your own by adding components to gameObjects - the same way everything else in Unity gets put together.

As I did my research, I was surprised at the breadth of functionality, so at the risk of being boring, let’s quickly walk through the functionality of each Cinemachine prefab.

Virtual Cameras

Bare Bones Basic Virtual Camera inspector

The Virtual Camera is the barebones base virtual camera component slapped onto a gameObject with no significant default values. Other virtual cameras use this component (or extend it) but with different presets or default values to create specific functionality.

The Freelook Camera provides an out-of-the-box and ready-to-go 3rd person camera. Its most notable feature is the rigs that allow you to control and adjust where the camera is allowed to go relative to the player character or more specifically the Look At target. If you’re itching to build a 3rd person controller - check out my earlier video using the new input system and Cinemachine.

The 2D Camera is pretty much what it sounds like and is the virtual camera to use for typical 2D games. Settings like soft zone, dead zone, and look-ahead time are really easy to dial in, so you can get a good-feeling camera super quick. This is a camera I intend to look at more in-depth in a future tutorial.

The Dolly Camera will follow along on a track that can be easily created in the scene view. You can also add a Cart component to an object and just like the dolly camera, the cart will follow a track. These can be useful to create moving objects (cart) or move a (dolly) camera through a scene on a set path. Great for cutscenes or footage for a trailer.

“Composite” Cameras

The word “composite” is my word. The prefabs below use a controlling script for multiple child cameras and don’t function the same as a single virtual camera. Instead, they’re a composite of different objects and multiple different virtual cameras.

Some of these composite cameras are easier to set up than others. I found the Blend List camera 100% easy and intuitive. Whereas the Clear Shot camera? I got it working but only by tinkering with settings that I didn’t think I’d need to adjust. The 10 minutes spent tinkering is still orders of magnitude quicker than trying to create my own system!!

The Blend List Camera allows you to create a list of cameras and blend from one camera to another after a set amount of time. This would be super powerful for recording footage for a trailer.

Blend List Camera

The State-Driven Camera is designed to blend between cameras based on the state of an animator. So when an animator transitions, from say running to idle, you might switch to a different virtual camera that has different settings for damping or a different look-ahead time. Talk about adding some polish!

The ClearShot Camera can be used to set up multiple cameras and then have Cinemachine choose the camera that has the best shot of the target. This could be useful in complex scenes with moving objects to ensure that the target is always seen or at least is seen the best that it can be seen. This has similar functionality to the Blend List Camera, but doesn’t need to have timings hard coded.

The Target Group Camera component can act as a “Look At” target for a virtual camera. This component ensures that a list of transforms (assigned on the Target Group Camera component) stays in view by moving the camera accordingly.

Out of the Box settings with Group Target - Doing its best to keep the 3 cars in the viewport

The Mixing Camera is used to set the position and rotation of a Unity camera based on the weights of its child cameras. This can be used in combination with animating the weights of the virtual cameras to move the Unity camera through a scene. I think of this as creating a bunch of waypoints and then lerping from one waypoint to the next. Other properties besides position and rotation are also mixed.

Ok. That’s a lot. Take a break. Get a drink of water, because those are the prefabs, and there’s still a lot more to come!

Shared Camera Settings

There are a few settings that are shared between all or most of the virtual cameras. The cameras that don’t share very many settings fall into the “Composite Camera” category and have child cameras that DO share the settings. So let’s dive into those settings to get a better idea of what they all do and, most importantly, what we can then do with Cinemachine.

All the common and shared virtual camera settings

The Status line I find a bit odd. It shows whether the camera is Live, in Standby, or Disabled, which is straightforward enough, but the “Solo” button next to the status feels like an odd fit. Clicking this button will immediately give visual feedback from that particular camera, i.e. it treats this camera as if it were the only, or solo, camera in the scene. If you are working on a complex cutscene with multiple cameras I can see this feature being very useful.

The Follow Target is the transform for the object that the virtual camera will move with or will attempt to follow based on the algorithm chosen. The “composite” cameras don’t require one themselves, but all the individual virtual cameras will need a follow target.

The Look At Target is the transform for the object that the virtual camera will aim at or will try to keep in view. Often this is the same as the Follow Target, but not always.

The Standby Update determines the interval that the virtual camera will be updated. Always, will update the virtual camera every frame whether the camera is live or not. Never, will only update the camera when it is live. Round Robin, is the default setting and will update the camera occasionally depending on how many other virtual cameras are in the scene.

The Lens gives access to the lens settings on the Unity camera. This can allow you to change those settings per virtual camera. This includes a Dutch setting that rotates the camera on the z-axis.

The Transitions settings allow customization of the blending or transition from one virtual camera to or from this camera.

Body

The Body controls how the camera moves and is where we really get to start customizing the behavior of the camera. The first slot on the body sets the algorithm that will be used to move the camera. The algorithm chosen will dictate what further settings are available.

It’s worth noting that each algorithm selected in the Body works alongside the algorithm selected in the Aim (coming up next). Since these two algorithms work together no one algorithm will define or create complete behavior.

The transposer moves the camera in a fixed relationship to the Follow Target, applying an offset and damping along the way.

The framing transposer moves the camera in a fixed screen-space relationship to the Follow Target. This is commonly used for 2D cameras. This algorithm has a wide range of settings to allow you to fine-tune the feel of the camera.

The orbital transposer moves the camera in a variable relationship to the Follow Target, but attempts to align its view with the direction of motion of the Follow Target. This is used in the free-look camera and, among other things, can be used for a 3rd person camera. I could also imagine this being used for an RTS-style camera where the Follow Target is an empty object moving around the scene.

The tracked dolly is used to follow a predefined path - the dolly track. Pretty straightforward.

Dolly track (Green) Path through a Low Poly Urban Scene

Hard lock to target simply sticks the camera at the same position as the Follow Target. The same effect as setting a camera as a child object - but with the added benefit of it being a virtual camera not an actual Unity camera component that has to be managed. Maybe you’re creating a game with vehicles and you want the player to be able to choose their perspective with one or more of those fixed to the position in the vehicle?

The “do nothing” transposer doesn’t move the camera with the Follow Target. This could be useful for a camera that shouldn’t move or should be fixed to another object but might still need to aim or look at a target. Maybe for something like a security-style camera that is fixed on the side of a building but might still rotate to follow the character.

Aim

The Aim controls where the camera is pointed and is determined by which algorithm is used.

The composer works to keep the Look At target in the camera frame. There is a wide range of settings to fine-tune the behavior. These include look-ahead time, damping, dead zone and soft zone settings.

The group composer works just like the composer unless the Look At target is a Cinemachine Target Group. In that case, the field of view and distance will adjust to keep all the targets in view.

The POV rotates the camera based on user input. This allows mouse control in an FPS style.

The “same as follow target” does exactly as it says - it sets the rotation of the virtual camera to the rotation of the Follow Target.

“Hard look at” keeps the Look At target in the center of the camera frame.

Do Nothing. Yep. This one does nothing. While this sounds like an odd design choice, this is used with the 2D camera preset as no rotation or aiming is needed.

Noise

The noise settings allow the virtual camera to simulate camera shake. There are built-in noise profiles, but if that doesn’t do the trick you can also create your own.

Extensions

Cinemachine provides several out-of-the-box extensions that can add additional functionality to your virtual cameras. All the Cinemachine extensions extend the class CinemachineExtension, leaving the door open for developers to create their own extensions if needed. In addition, all existing extensions can also be modified.

Cinemachine Camera Offset applies an offset to the camera. The offset can be applied after the body, aim, noise or after the final processing.

Cinemachine Recomposer adds a final adjustment to the composition of the camera shot. This is intended to be used with Timeline to make manual adjustments.

Cinemachine 3rd Person Aim cancels out any rotation noise and forces a hard look at the target point. This is a bit more sophisticated than a simple “hard look at” as target objects can be filtered by layer and tags can be ignored. Also if an aiming reticule is used the extension will raycast to a target and move the reticule over the object to indicate that the object is targeted or would be hit if a shot was to be fired.

Cinemachine Collider adjusts the final position of the camera to attempt to preserve the line of sight to the Look At target. This is done by moving the camera away from gameObjects that obstruct the view. The obstacles are defined by layers and tags. You can also choose a strategy for moving the camera when an obstacle is encountered.

Cinemachine Confiner prevents the camera from moving outside of a collider. This works in both 2D and 3D projects. It’s a great way to prevent the player from seeing the edge of the world or seeing something they shouldn’t see.

Polygon collider setting limits for where the camera can move

Cinemachine Follow Zoom adjusts the field of view (FOV) of the camera to keep the target the same size on the screen no matter the camera or target position.

Cinemachine Storyboard allows artists and designers to add an image over the top of the camera view. This can be useful for composing scenes and helping to visualize what a scene should look like.

Cinemachine Impulse Listener works together with an Impulse Source to shake the camera. This can be thought of as a real-world camera that is not 100% solid and has some shake. A source could be set on a character’s feet and emit an impulse when the feet hit the ground. The camera could then react to that impulse.

Cinemachine Post Processing allows a postprocessing (V2) profile to be attached to a virtual camera. Which lets each virtual camera have its own style and character.

There are probably even more… but these were the ones I found.

Conclusion?

Cinemachine is nothing short of amazing and a fantastic tool to speed up the development of your game. If you're not using it, you should be. Even if it doesn’t provide the perfect solution that ships with your project it provides a great starting point for quick prototyping.

If there’s a Cinemachine feature you’d like to see in more detail. Leave a comment down below.

A track and Dolly setup in the scene - I just think it looks neat.

C# Extension Methods

Time is one of the biggest obstacles to creating games. We spend a lot of time writing code and debugging that code. And it’s not uncommon to find ourselves writing the same code over and over which is tedious and worse it’s error-prone. The less code you have to write and the cleaner that code is the faster you can finish your game!

Extension methods can help you do just that - write less code and cleaner code with fewer bugs. Which again means you can finish your game faster.

Extension methods allow us to directly operate on an instance rather than needing to pass that instance into a method, and maybe best of all, we can do this with types that we don’t have access to, such as many of the built-in types in Unity or a type from an asset from the Asset Store. As the name suggests, extension methods allow us to extend and add functionality to any class or struct.

Automatic Conversion isn’t built in


As a side note, in my opinion, learning game development is all about adding tools to your toolbox and extension methods should be one of those tools. So let’s take a look at how they work and why they are better than some other solutions.

Concrete Example

Local function to do the conversion


In a past project, I needed to arrange gameObjects on a grid. The grid lattice was 1 by 1 and set on integer values. The problem, or in reality, the pain point comes from positions in Unity being a Vector3 which is made of 3 floats, not 3 integers.

There is a type Vector3Int and I used that struct to store the position of the objects.

But!

A static helper class with a static function is better, but not the best


Casting from Vector3 to Vector3Int isn’t built into Unity (the other direction is!). And sure, you could create a conversion operator, but that’s the topic of another post.

Helper Class Call


So, when faced with this inconvenience, my first thought, of course, was to write a function that takes in a Vector3, rounds each component and returns a Vector3Int. This works perfectly fine, but that method is inside a particular class which means if I need to do the conversion somewhere else I need to copy the function into that second class. This means I’m duplicating code which generally isn’t a good practice.

Extension method!!!


Ok, fine. The next step is to move the function into a static helper class. I do this type of thing all the time. It’s really helpful. But the result is more code than we need. It’s not A LOT more, but still, it’s more than we need.

If this was my own custom class or struct, I’d just add a public function that could handle the conversion, but I don’t have access to the Vector3 struct. Yet, I have some needed functionality that will be used repeatedly AND I want to type as little as possible while maintaining the readability of the code.

And this situation? This is exactly where extension functions shine!

Extension Method Call


To turn our static function into an extension method, all we need to do is add the keyword “this” to the first input parameter of the static function. And then we can call the extension method as if it was part of the struct. Pretty easy and pretty handy.
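For the Vector3 example, that might look like this (ToVector3Int and the class name are my choices):

    using UnityEngine;

    public static class Vector3Extensions
    {
        // "this" on the first parameter is what makes it an extension method
        public static Vector3Int ToVector3Int(this Vector3 vector)
        {
            return new Vector3Int(
                Mathf.RoundToInt(vector.x),
                Mathf.RoundToInt(vector.y),
                Mathf.RoundToInt(vector.z));
        }
    }

And it gets called as if it were a member of Vector3:

    Vector3Int gridPosition = transform.position.ToVector3Int();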

Important Notes

It’s important to note that with extension functions the type that you are extending needs to be the first input parameter in the function. Also, our static extension method needs to be inside a static class. Miss one of these steps and it won’t work correctly.

More Examples

So let’s look at some more examples of what you could do with extension methods. These of course are highly dependent on your game and what you need to do, but maybe these will spark some ideas and creativity.

Need to swap the Y and Z values of a Vector3? No problem!


Maybe you need to set the alpha of a sprite in a sprite renderer. Yep. We can do that.

Reset a transform? Locally? Globally? Piece of cake.

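Here’s what those three might look like as sketches (all the names are mine):

    using UnityEngine;

    public static class MoreExtensions
    {
        // Swap the Y and Z components of a Vector3
        public static Vector3 SwapYZ(this Vector3 vector)
        {
            return new Vector3(vector.x, vector.z, vector.y);
        }

        // Set just the alpha of a SpriteRenderer's color
        public static void SetAlpha(this SpriteRenderer spriteRenderer, float alpha)
        {
            Color color = spriteRenderer.color;
            color.a = alpha;
            spriteRenderer.color = color;
        }

        // Reset a transform's local values
        public static void ResetLocal(this Transform transform)
        {
            transform.localPosition = Vector3.zero;
            transform.localRotation = Quaternion.identity;
            transform.localScale = Vector3.one;
        }

        // Reset a transform's world position and rotation
        public static void ResetGlobal(this Transform transform)
        {
            transform.position = Vector3.zero;
            transform.rotation = Quaternion.identity;
        }
    }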

Extension methods also work with inheritance. For example, most Unity UGUI components inherit from UnityEngine.UI.Graphic which contains the color information. So once again it would be easy to create an extension method to change the alpha for nearly every UGUI element.

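A sketch of that Graphic version (again, the names are mine):

    using UnityEngine;
    using UnityEngine.UI;

    public static class GraphicExtensions
    {
        // Works for Image, RawImage, Text... anything deriving from Graphic
        public static void SetAlpha(this Graphic graphic, float alpha)
        {
            Color color = graphic.color;
            color.a = alpha;
            graphic.color = color;
        }
    }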

Now, taking another step down the tunnel of abstraction, extension methods also work with generics. If you are scared of generics or have no idea what I’m talking about check out my earlier video on the topic.

Either way, let’s imagine you have a list and you want every other element in that list (or some other sorting). One way, and of course not the only way, to do that filtering would be with a generic extension method like so.

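One possible version of that generic extension method:

    using System.Collections.Generic;

    public static class ListExtensions
    {
        // Returns every other element, starting with the first
        public static List<T> EveryOther<T>(this List<T> list)
        {
            var result = new List<T>();
            for (int i = 0; i < list.Count; i += 2)
                result.Add(list[i]);
            return result;
        }
    }

    // Usage: List<int> filtered = myNumbers.EveryOther();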

That’s it! They’re pretty simple and easy to use, but I’d argue they provide another tool to write simple, cleaner, and more readable code.

Changing Action Maps with Unity's "New" Input System

If you missed my first post (and video) on Unity’s new input system - go check that out. This post will build on what that post explored.

Why Switch Actions Maps?

Yes, I made a really horrible vehicle controller


Action maps define a series of actions that can be contextual.

For example, a 3rd person controller might use one action map, driving a vehicle may use another, and using the UI might use yet another.

With the new input system, it’s easy to control which set of actions (i.e. action map) is active and being used by a player. You can easily toggle off your player’s motion while navigating the UI or prevent the player from casting a spell while riding a horse…

Whatever.

You have more control and the code that gives you that control, while more abstract, is generally far cleaner than it would be with the old input system.

But First, A Problem To Fix

As mentioned in the last post, the simplest implementation of the new input system has each object create an instance of an Input Action Asset. This works great if there is only one object reacting to input, but if there is more than one object listening to input (UI, SFX, vehicles, etc.) this gets messy. Exponentially more so if you intend on switching action maps, as all those objects will need to know which action map is currently in use. Forget one object, and something strange or goofy might start happening - like shooting sound effects while driving a tractor (not that that happened to me - nope, not at all).

To be honest, I’m not sure what the best solution for this is. Maybe there is some clever programming pattern - and if there is PLEASE LET ME KNOW - but for now my solution is to fall back and use an input manager.

Why? This allows a single and static instance of the Input Action Asset to be created and accessed by any other class that needs to be aware of player input.

I don’t love this dependence on a manager script, but I think it’s far tidier than trying to keep a bunch of scripts in the scene up to date. The manager stays in charge of enabling and disabling action maps. And! When a map is disabled it won’t invoke events so the scripts that are subscribed to those events will simply have nothing to respond to.

Input Manager

Input Manager Complete Script.png

The input manager is pretty simple and straightforward. It has a public static instance of the Input Action Asset and an action that will get called when the action map is changed.

The real magic happens in the last function.

The ToggleActionMap function is again public and static and will be called by scripts that need to toggle the action map (duh!).

Inside the function, we first check to see if the requested action map is already enabled. If it is we don’t need to do anything. However, if it’s not active, we toggle off all action maps by calling Disable on the Input Action Asset itself. This has the same effect as calling Disable on each and every action in the action map.

Next, we invoke the Action Map Changed event. This allows things like the UI to be aware of changes and give the player a visual indication of the change. This could also be used to toggle cameras or SFX depending on the action map activated. This step is optional, but I think will generally prove to be pretty useful.

The final step is to enable the desired action map. And that’s it. We now have the ability to change action maps! Say what you will about the new input system, but that’s mighty clean!
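Since the screenshot may not come through, here’s a sketch of the manager. “FarmerInputActions” is the C# class generated from my Input Action Asset, and the member names are my own - adjust them to match your project:

```csharp
using System;
using UnityEngine;
using UnityEngine.InputSystem;

public class InputManager : MonoBehaviour
{
    // Single, globally accessible instance of the Input Action Asset.
    public static FarmerInputActions inputActions;

    // Raised whenever the active action map changes so UI, SFX,
    // cameras, etc. can react.
    public static event Action<InputActionMap> actionMapChanged;

    private void Awake()
    {
        inputActions = new FarmerInputActions();
    }

    public static void ToggleActionMap(InputActionMap actionMap)
    {
        // If the requested map is already enabled, there's nothing to do.
        if (actionMap.enabled)
            return;

        // Disabling the asset disables every action map it contains.
        inputActions.Disable();

        // Optional, but lets the rest of the game know about the change.
        actionMapChanged?.Invoke(actionMap);

        actionMap.Enable();
    }
}
```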

Examples of Implementation

For my use case, the player can change between a normal 3rd person controller and driving a very janky tractor (the jank is in my control code, not the tractor itself). The change to controlling the tractor happens when the player walks near the tractor and enters a trigger surrounding the tractor. The player can then “exit” the tractor by pressing the escape key or the “north” button on a gamepad.

You can see the player and tractor action maps below.

3rd Person “Player” Action Map

Tractor Action Map

Tractor Controller Class.png

Then in the tractor controller class, there are a handful of movement-related variables, but most important is the Input Action variable that will hold a reference to the movement action on the tractor action map. We get a reference to this Input Action in the OnEnable function by going through the static instance of the Input Action Asset in the Input Manager class, then the tractor action map, and finally the movement action itself.

Also in the OnEnable, we subscribe the ExitTractor function to the “Exit” action. This allows the player to press a button and switch back to the 3rd person controller.

In the OnDisable function, we unsubscribe to prevent duplicate calls or errors if the object is turned off or destroyed.

The Exit Tractor function then calls the public static ToggleActionMap function on the Input Manager to change the active action map to the player action map.

Likewise, in the OnTriggerEnter function, the ToggleActionMap is called to activate the tractor action map.

It’s actually pretty simple. Of course, the exact implementation of how and when action maps are changed depends on your game.
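Here’s a trimmed-down sketch of that wiring (movement variables omitted; the map and action names - Tractor, Player, Movement, Exit - mirror my asset, so adjust as needed):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class TractorController : MonoBehaviour
{
    private InputAction movement;

    private void OnEnable()
    {
        // Static asset instance -> tractor action map -> movement action.
        movement = InputManager.inputActions.Tractor.Movement;

        // Lets the player press a button to hop back out of the tractor.
        InputManager.inputActions.Tractor.Exit.performed += ExitTractor;
    }

    private void OnDisable()
    {
        // Unsubscribe to avoid duplicate calls or errors if the
        // object is turned off or destroyed.
        InputManager.inputActions.Tractor.Exit.performed -= ExitTractor;
    }

    private void ExitTractor(InputAction.CallbackContext context)
    {
        // Hand control back to the 3rd person controller.
        InputManager.ToggleActionMap(InputManager.inputActions.Player);
    }

    private void OnTriggerEnter(Collider other)
    {
        // A real implementation should check that "other" is the player.
        InputManager.ToggleActionMap(InputManager.inputActions.Tractor);
    }
}
```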

Final Thoughts

I don’t love that any class in the game can switch the active action map, but I’m honestly not sure how to get around it. The input manager could easily have some filters in the Toggle Action Map function, but that will absolutely depend on the implementation and needs of your game. Or you might be able to come up with some wrapper class that wraps the Input Action Asset and only gives access to the features (likely just the events) that you want to have widely available.

Also, this approach doesn’t directly work for having multiple players since there is only one instance of the Input Action Asset. There would need to be some additional cleverness and that… that I’ll save for another tutorial (maybe).

Unity's New Input System

Version 1.0.2 of the input system was used along with Unity 2020.3

Warning! If you are looking for a quick 5-minute explanation of Unity’s new input system - this isn’t going to be it - and you aren’t going to find one! The new system is more complex than the old system. Especially when it comes to simple things like knowing when the spacebar has been released.

I’m going to do my best to be concise and get folks up and running, but it will take some effort on your part! You will likely need to dive into the admittedly opaque Unity documentation if you have a special use case. It’s just the way it is. Input is a complex topic and Unity has put together a system that can nicely handle that complexity.

So Why Use Unity’s New Input System?

Using Unity’s “NEW” Input system to move, jump, rotate the camera, play SFX, shoot and charge up a power shot

I’ve got three reasons. Three reasons I’ve stolen, but they are good reasons to use the new Input System.

If you want players to be able to use multiple devices OR you are developing for multiple platforms, the new system makes it very, very easy to do so. Frankly, I was shocked at how easily I could add a gamepad and switch back and forth between it and a keyboard.

It’s Event-Based! When an action is started, completed (performed), or canceled, an event can be called. While you will still need to “poll” values every frame for things like player or camera motion, button presses for other bits such as jumping or shooting no longer need to clog an update function! This adds some perceived complexity - especially if you don’t feel comfortable with events - but it is an awesome feature.

Input debug system! Unity provides an input debugger so you can see the exact values, in real-time, of your system’s input. This makes it so much easier to see if a device is recognized and functioning properly. AND! In the case that you do need to do some polling of an input value (think similar to the old system in an update function), it’s much easier to see what buttons are being pressed and what those input values look like.

So yeah! Those are pretty fantastic reasons. The new input system does take some time and patience to learn - doubly so if you are used to the old system, but hopefully, you’ll agree the effort is worth it.

Setting It Up

Input System Package Manager.png

To get started, you’ll need Unity version 2019.1 or newer, and the system is added via the package manager. When importing the system you will likely get a popup with a warning to change a setting. This setting controls which system Unity will use to get input data. You can make further changes in Project Settings > Player > Active Input Handling. From there, you can choose to use either the new system, the old system, or both.

Input Warning Trimmed.png

If you can’t get the new system to function, this setting would be worth checking.

Next, we need to create a new “Input Actions” asset. This is done like any other asset, by right-clicking in a project folder or using the asset menu. Make sure to give the asset a good name as you’ll be using this name quite often.

With the asset created you can select it and then in the inspector press “edit asset.” This will open a window specific to THIS input action asset.

So if you have more than one input action asset, you will need to open additional windows - there is no way to toggle this window to another asset. Personally, I found this a bit confusing when first getting started as it feels different than other Unity windows and functionality.

Inside the Input Action Window

This is where all the setup happens and there’s a lot going on! There are way more options in this window than could possibly be covered in this video or even several more videos. But! The basics aren’t too complex and I’m going to try and look at some of the more common use cases.

Input Action Asset Window - Including added Actions for Movement and Jump

On the left, you’ll see a column for “Action Maps.” These are essentially a set of inputs that can be grouped together. Each Input Action asset can have multiple action maps. This can be useful for different control schemes for example if your player can hop in a car or maybe on a horse and the controls will be different. This can also be used for UI controls - so that when a menu is opened the player object stops responding and the controls for a gamepad now navigate through a menu.

To be honest, I haven’t yet figured out a nice clean way to swap action maps but it might be the topic of a future post/video so let me know (comment below) if you are interested in seeing that.

To create a new action map simply press the plus at the top right of the column and give the action map a good name. I’ve called mine “Player.”

The middle column is where our actions get defined. These are not the buttons or keys that will be pressed - those are the bindings - but these are the larger actions that we want the player to be able to do such as move, jump, or shoot.

To get started I’m going to create two actions: one for movement and one for jumping.

Each action has an “action type” and a “control type” - you can see these to the right in the image above. These options can easily feel ambiguous or even meaningless as they can seemingly have little to no impact on how your game plays - but when you want to really dial in the controls they can be very useful.

Action Types.png

Action types come in three flavors: value, button, and passthrough. The main difference between the three is when they call events and which events get called.

Link: Unity Action Type Documentation

Value Action

The Value action type will call events whenever a value is changed and it will call the events started, performed, and canceled (more on these events later).

The “started” event will get called when the control moves away from the default value - for example, if a gamepad stick moves away from (0,0).

The “performed” event will then get called each time the value changes.

The “canceled” event will get called when the control moves back to the default value - i.e. the gamepad stick going back to (0,0).

This would seem like a good choice for movement. However, the events are only called when the value changes, so they won’t fire if the player holds down the W key or keeps the gamepad stick in the same position. That’s not to say it’s not useful, but there are potentially other problems that need to be solved for creating player motion if this action type is used.
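If seeing the three events in code helps, here’s a tiny logger sketch. It assumes a Value-type action with a Vector2 control (like a stick) assigned through an InputActionReference in the inspector - the class and field names are my own:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class ValueActionLogger : MonoBehaviour
{
    // Drag in a Value-type action (e.g. stick movement) in the inspector.
    [SerializeField] private InputActionReference moveAction;

    private void OnEnable()
    {
        moveAction.action.started += OnStarted;
        moveAction.action.performed += OnPerformed;
        moveAction.action.canceled += OnCanceled;
        moveAction.action.Enable();
    }

    private void OnDisable()
    {
        moveAction.action.started -= OnStarted;
        moveAction.action.performed -= OnPerformed;
        moveAction.action.canceled -= OnCanceled;
        moveAction.action.Disable();
    }

    private void OnStarted(InputAction.CallbackContext context)
    {
        Debug.Log("Started: control moved away from its default value");
    }

    private void OnPerformed(InputAction.CallbackContext context)
    {
        Debug.Log("Performed: " + context.ReadValue<Vector2>());
    }

    private void OnCanceled(InputAction.CallbackContext context)
    {
        Debug.Log("Canceled: control back at its default value");
    }
}
```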

Button Action

The button action type will call events based on the state of the button and the interactions assigned to the action itself. The interactions, which we will get to, will define when the performed and canceled events are called. In the end, the Button action type is what you want when events should be called when a button is pressed, released, or held. So far in my experience, this covers the majority of my use cases and is what I’ll be using throughout this tutorial.

PassThrough

The PassThrough action type is very similar to the value action type. It will call the performed event any time the control changes value. BUT! It will not call started or canceled.

The passthrough action also does not do what Unity calls disambiguation - meaning that if two controls are assigned Unity won’t be smart and try to figure out which one to use. If this sounds like something you might need to know about, check out the Unity documentation.

If your head is starting to spin and you’re getting lost in the details, that’s fair. This system is far more powerful than the old system, but as a trade-off, there are way more bits and pieces to it.

Interactions

Interaction Types

I’m not going to go too deep into the weeds on interactions, but this is where we can refine the behavior a bit more. This is where we can control when the events get invoked. We have options to hold, press (which includes release options), tap, slow tap, and multi-tap. All of these interactions were possible with the old system, but in some cases, they were a bit challenging to realize.

For the most part, I found that interactions are fairly self-explanatory, with some potentially confusing nuance between tap and slow tap. The documentation, while a bit long, does a great job of clarifying some of that nuance.

Link: Unity Documentation on Interactions

Processor Types

Processors

Sometimes you need or want to make some adjustments to the input values such as scaling or normalizing vectors. Essentially processors allow you to do some math with the input values before events are called and values are sent out. These aren’t terribly complex and are going to be very use case specific.

Link: Unity Documentation on Processors

Adding Bindings

Still with me? Now that we have our actions set up we need to add bindings - these are the actual inputs from the player! Think key presses or gamepad stick movements. I’m going to create bindings for both the keyboard and a gamepad for all the controls. This is a tiny bit more work, but once we get to the code, the inputs will be handled the same which is really powerful!

Movement

The first binding will be for the keyboard to use the WASD keys for movement. We need to add a 2D Vector Composite. To find this option you’ll need to right-click on the movement action. This will automatically add in four new bindings for the four directions.

Composite bindings essentially allow us to combine multiple inputs to mimic a different input device, i.e. using the WASD keys in the same way as a gamepad stick. You may notice that there is a mode option, but for our use case either digital option will work.

Notice also that interactions and processors can be assigned to individual bindings, allowing more customization! These interactions and processors work the same for bindings as they do for actions.

Link: Composite Mode Documentation (scroll down just a bit)

Add 2D vector Composite Binding by right Clicking on the Movement Action

With the WASD binding created we then need to assign keys or the input path. We can do this by clicking on what looks like a dropdown next to “path.” If this dropdown is not present click the T button which toggles between the dropdown and typing.

Then you can select the correct key from the list. OR! Press the listen button and then press the button you want for the binding. It couldn’t be much easier.

Add bindings by search or using the “Listen” functionality

The second binding will be for the gamepad. You can simply click on the plus for the movement action and choose “Add Binding.” Selecting this binding you will see options to the right. Once again you can use the “listen” option and move the gamepad stick, but it only allows one direction on the stick. Maybe there’s a way around this but I haven’t found it! So select any direction and we’ll edit the path manually to take in all values from the stick.

Once you have a path, click the T button to manually edit it. From there we’ll remove the direction-specific part. In my case this will look like <Gamepad>/leftStick. With this done, you can click the T button again and the path should be just the left stick.

Adding the Left Stick Binding

Jump

I’ll repeat the process of adding bindings for the jump action, adding a binding for the spacebar and the “south” button on my gamepad. Unity has been pretty clever here with the gamepad buttons. Rather than give controller-specific names, the buttons use cardinal directions so that the “south” button will work regardless of whether it is an Xbox or Playstation controller.

Now that we have the basic actions and bindings implemented, we’re almost ready to get into the code. But first! We need to make sure the asset is saved. At the top right there is a save asset button. This has caught me out a few times - make sure you press it to save changes.

There is also an auto-save feature, which is great until you generate C# code (which we’ll talk about in a minute). In that case, the autosave tends to make the interface laggy and a bit harder to use.

Adding the Jump Binding

Implementation

There is a default player controller that comes with the input system. It has its place, but in my opinion, if you’ve come this far it’s worth digging deeper and learning how to use the input system with your own code. It’s also important to know that the input system can communicate by broadcasting messages, using drag-and-drop Unity Events, or via my preferred method, C# events.

Video Tutorial: Events, Delegates, and Actions!!!

If you aren’t familiar with events, check out my earlier tutorial. Events aren’t easy to wrap your head around at first but are hugely powerful and frankly are at the core of implementing the new input system.

To get access to the C# events we first need to generate a C# class for the actions we just created.

Thankfully, Unity will do that work for us!

In the project folders, select the Input Action Asset that we created at the beginning. In the inspector, you should see a toggle labeled “Generate C# Class”. Toggle this on and press “apply.”

This should create a new C# script in the same location as the input action asset and with the same name - unless you changed the default settings. You can open it up, but there’s no need to edit it or do any work on it so I’m just going to leave it be.

Custom Class

The “Simplest” Implementation of the New Input System for a Player Controller

Next, we’ll need a custom player controller class.

This class will need access to the namespace UnityEngine.InputSystem.

Then we’ll need two new variables. The first is of the type of our newly created Input Action Asset, in my case this is “Farmer Input Actions.” And the second is of type Input Action and will hold a reference to our movement input action.

You can create a variable for each input action and cache a reference to it - I’ve seen many videos choose to do it this way. I have chosen not to do this with most of the input actions to keep the code compact for the purposes of this tutorial - it’s up to you.

Also, for most event-triggered actions you don’t need to reference the input action outside of the OnEnable and OnDisable functions, which for me lessens the need for a cached reference.

Before we start working with the input actions and events, we need to create a new instance of the Input Action Asset.

I’ve chosen to do this in the Awake function. The fact that this player controller class will have its own instance is important! The Input Action Asset is not static or global!

With the instance created, we need to wire up the events and enable the input actions and this is best done in the OnEnable function.

For the movement input action, I’ll cache a reference and you can see that this is accessed through the instance of the Input Action Asset, then the Player action map, and finally the movement action itself. I am choosing to cache this reference because we will need access to it in the fixed update function.

With the reference cached, we need to enable the input action with the “Enable” function. Do note that there is an “enabled” property that is not the same as the “Enable” function. If you forget to call this function, the input action will not work. Like a few other steps, this one caught me out a few times too.

The steps for the jump input action are similar, but in this case, I won’t be caching a reference. Instead, I will be subscribing a function to the performed event on the jump input action. This subscribed function will get called each time the jump key or button is pressed.

There is NO need to constantly check whether the jump button is pressed in an update function! This is one of the great improvements and advantages of the new system. Cleaner code albeit at the cost of a bit more complexity.

To create the jump function you can do it manually, or in Visual Studio, you can right-click and choose “Quick Actions and Refactoring” and then choose “Generate Method.” This will ensure that the input parameter is of the correct type. Then inside the function, we can simply add a debug message to be able to test it.

The next step in the setup is to disable both the movement and jump input actions. This should be done in the OnDisable function. This may not be 100% needed, but it ensures that the events won’t get called and throw errors if the object is disabled. Also note that I did not unsubscribe at first. In most cases this won’t be a problem or throw an error, but if the object is turned on and off, the jump function will get called multiple times. This was spotted by a YT viewer (THANKS DAVE).

The final step for testing is to read the movement values in the FixedUpdate function. I’m using FixedUpdate because I’ll use the physics engine to move and control the player object. Reading the values is pretty straightforward. To keep things simple, I’ll use another debug statement, and to get the values we simply call “ReadValue” on the movement input action, giving it a generic parameter of type Vector2 since we have both X and Y values for movement.
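Pulling all of those steps together, here’s a sketch of the controller. The class and member names - FarmerInputActions, Player, Movement, Jump - match my asset, so yours will differ. Note that it includes the unsubscribe fix mentioned above:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerController : MonoBehaviour
{
    // Generated from the Input Action Asset - NOT static or global,
    // so this controller has its very own instance.
    private FarmerInputActions inputActions;
    private InputAction movement;

    private void Awake()
    {
        inputActions = new FarmerInputActions();
    }

    private void OnEnable()
    {
        // Asset instance -> Player action map -> Movement action.
        movement = inputActions.Player.Movement;
        movement.Enable();

        // Event-based jump: no polling in an update function needed.
        inputActions.Player.Jump.performed += DoJump;
        inputActions.Player.Jump.Enable();
    }

    private void OnDisable()
    {
        movement.Disable();

        // Unsubscribe so toggling the object on and off doesn't
        // stack up multiple subscriptions.
        inputActions.Player.Jump.performed -= DoJump;
        inputActions.Player.Jump.Disable();
    }

    private void DoJump(InputAction.CallbackContext context)
    {
        Debug.Log("Jump!");
    }

    private void FixedUpdate()
    {
        // Poll the current movement value each physics step.
        Vector2 moveInput = movement.ReadValue<Vector2>();
        Debug.Log(moveInput);
    }
}
```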

Testing

Testing Input with Debug.png

At this point, we can test out our solution to make sure everything is wired up correctly. To do this we simply need to put our new player controller class on a gameObject and go into play mode.

Pressing the WASD keys or moving the gamepad stick should show values in the console, while pressing the spacebar or the south button on the gamepad should display our jump message.

Whew!

If you’re thinking that was a lot of work to display some debug messages, you’re right. It was. But! We have a system that works for both a keyboard and a gamepad, AND the code is really quite simple and clean. While the old system was quick to use with a keyboard or mouse, adding in a gamepad was a huge pain, not to mention we would need to code both inputs individually.

With the new system, the work is mostly at the front end, creating (and understanding) the Input Action Asset, leaving the implementation in code much simpler. Which, in my opinion, is a worthy trade-off.

So What’s Next?

I still want to look at a few more implementations of the new input system, but frankly, this is getting long already. In the intro GIF you may have noticed a lot more functionality than the project currently has. ALL of the extra functionality is based on what I’ve shown already, but I think is worth covering - in another tutorial.

For now, if you want to see my full implementation of the 3rd person controller (minus the camera) you can find it here on PasteBin. I will transition all the project code to GitHub once the series is complete.

Topics I’d still like to look at:

  • Creating the 3rd Person Controller

  • Controlling a Cinemachine Camera

  • Triggering UI and SFX with the new System

    • Shooting!!

  • “Charging Up” for a power shot

  • Player rebinding during playmode

  • Swapping action maps

    • UI? Boat? Car?

If you’d like to see one or all of those topics, leave a comment below. They’re only worth doing if folks are interested.

Bolt vs. C# - Thoughts with a dash of rant

Bolt vs C Sharp.png

It’s not uncommon for me to get asked my thoughts on Bolt (or visual scripting in general) versus using C# in the Unity game engine. It’s a topic that can be very polarizing, leaving some feeling the need to defend their choice or state that their choice is the right one and someone else’s choice is clearly wrong.

Which is better Bolt or C#?

I wouldn’t be writing this if I didn’t have an opinion, but it’s not the same answer for every person. Like most everything, this question has a spectrum of answers and there is no one right answer for everyone at every point in their game development journey. Because that’s what this is, no matter whether you are downloading Unity for the first time, completing your first game, or working as a senior engineer at a major studio. It’s a journey.

A Little History

Eight years ago I was leaving one teaching job for another and starting to wonder how much longer I would or could stay a classroom teacher. While doing a little online soul searching, I found an article about learning to code - something that had been on my to-do list for a long time. I bookmarked it and came back to it after starting the new job.

One of the suggestions was to learn to program by learning to use Unity. I was in love from the moment I made my first terrain and was able to run around on it, and I continued to play and learn.

It didn’t take long before I needed to do some programming. So I started with Javascript (Unityscript) as it was easy to read and I found a great series of videos walking me through the basics. I didn’t get very far. Coding took a long time and a lot of the code I wrote was a not-so-distant relative of guessing and checking.

Then I saw Playmaker! It looked amazing! Making games without code? Yes. Please! I spent a few months working with Playmaker and I was getting things to work. Very quickly and very easily. Amazing!

But as my projects got more complicated I started to find the limit of the actions built into Playmaker and I got frustrated. Sure I could make a “game” but it’s not a game I wanted to play. As a result, I’d come to the end of my journey with Playmaker.

So I decided to dive into learning C#. I knew it would be hard. I knew it would take time. But I was pretty sure it was what I needed to do next. I struggled like everyone else to piece together tutorials from so many different voices and channels scattered all over YouTube. After a few more months of struggle, I gave in and spent some money.

As a side note that’s a big turning point! That’s when exploring something new starts to turn into a hobby!

I bought a book. And then another and another. I now have thousands of pages of books on Unity, Blender, and C# on my shelves. Each book pushed me further and taught me something new. Years later and I still have books that I need to read.

After a year of starting and restarting new Unity projects, one of those projects started to take shape as an actual game - Fracture the Flag was in the works. But let’s not talk about that piece of shit. I’m very proud to have finished and published it, but it wasn’t a good game - no first game ever is. For those who did enjoy the game - thank you for your support!

With an upcoming release on Steam, I felt confident enough to teach a high school course using Unity. Ironically it would be the first of many new courses for me! I chose to use Playmaker over C# for simplicity and to parallel my own journey. No surprise, my students were up and running quickly and having a great time.

But my students eventually found the same limits I did. I would inevitably end up writing custom C# code for my students so they could finish their projects. This is actually how Playmaker is designed to be used, but as a teacher, it’s really hard to see your students limited by the tools you chose for them to use.

That’s when Bolt popped up on my radar! The learning curve was steeper, but it used reflection and that meant almost any 3rd party tool could be integrated AND the majority of the C# commands were ready to use out of the box. Amazing!

I took a chance and committed the class to using Bolt for the upcoming year. As final projects were getting finished, most groups didn’t run into the limits of Bolt, but some did. Some groups still needed C# code to make their projects work. But that was okay because Bolt 2 was on the horizon and it was going to fix the most major of Bolt’s shortcomings. I still wasn’t using Bolt in my personal projects, but I very much believed that Bolt (and Bolt 2) was the right direction for my class.

Bolt 2 was getting closer and it looked SO GOOD! As a community, we started to get alpha builds to play with and it was, in fact, good - albeit nowhere near ready for production. I started making Bolt 2 videos and was preparing to use Bolt 2 with my students.

And then! Unity bought Bolt and a few weeks later made it free. This meant more users AND more engineers working to improve the tool and finish Bolt 2 faster.

A Fork in the Road

Bolt2RIP.png

Then higher-ups in Unity decided to cancel Bolt 2. FUCK ME! What?

To be honest, I still can’t believe they did it, but they did. Sometimes I still dream that they’ll reverse course, but I also know that will never happen.

Unity chose accessibility over functionality. Unity chose to onboard more users rather than give current users the tools they were expecting, the tools they had been promised, and the tools they had been asking for.

So what do I mean by that?

For many, visual scripting is an easy on-ramp to game development. It’s less intimidating than text-based code and it’s faster to get started with. Plus, for some of those without much programming experience, visual scripting may be the easiest or only way to get started with game design.

Now, here’s where I may piss off a bunch of people. That’s not the goal. I’m just trying to be honest.

Game development is a journey. We learn as we go. Our skills build, and for the first couple of years we simply don’t have the skills to make a complete and polished game that can be sold for profit. In those early days, visual scripting is useful, maybe even crucial, but as our projects get more complex, current visual scripting tools start to fall apart under the weight of our designs. If you haven’t experienced this yet, that’s okay, but if you keep at game development long enough you will eventually see the shortcomings of visual scripting.

It’s not that visual scripting is bad. It’s not. It’s great for what it is. It just doesn’t have all the tools needed to build, maintain, and expand a project much beyond the prototype stage.

My current project “Where’s My Lunch” is simple, but I wouldn’t dream of creating it with Bolt or any other visual scripting tool.

Bolt 2 was going to bring us classes, scriptable objects, functions, and events - all native to Bolt. While that wasn’t going to bring it on par with C# (still no inheritance or interfaces for starters) it did shore it up enough that (in my opinion) small solo commercial games could be made with it and I could even imagine small indie studios using it in final builds. It was faster, easier to use, and more powerful.

So rather than give the Bolt community the tools to COMPLETE games we have been given a tool to help us learn to use Unity and a tool to help us take those first few steps in our journey of making games.

So What Do I Really Think About Bolt?

Bolt is fantastic. It really is. But it is what it is and not more than that. It is a great tool to get started with game design in Unity. It is, however, not a great tool to build a highly polished game. There are just too many missing pieces and important functionality that doesn’t exist. I don’t even think that adding those features is really Unity’s goal.

Bolt is an onboarding tool. It’s a way to expand the reach and the size of the community using Unity. Unity is a for-profit company and Bolt is a way to increase those profits. That’s not a criticism - it’s just the truth.

Unity has the goal of democratizing game development and while working toward that goal they have been constantly lowering the barrier for entry. They’ve made Unity free and are continuously adding features so that we all can make prettier and more feature-rich games. And Bolt is one more step in that direction.

By lowering the barrier in terms of programming more people will start using Unity. Some of those people will go on to complete a game jam or create an interesting prototype. Some of those people may go on to learn to use Blender, Magica Voxel and C#. And some of those people will go on to make a game that you might one day play.

So yeah, Bolt isn’t the tool that lets you make a game, and it certainly doesn’t allow creating games without code - because that’s just total bullshit - but Bolt is the tool that can help you start on that long journey of making games.

To the Beginner

You should proudly use Bolt. You are learning so much each time you open up Unity. So don’t be embarrassed about using Bolt or other visual scripting tools. Don’t make excuses for it, but do be ready for the day when you need to move on.

You may never make it to that point. You may stay in the stage of making prototypes or doing small game jams and that’s awesome! This journey is really fucking hard. But there may come a day when you have to make the jump to text-based coding. It’s a hard thing to do, but it’s pretty exciting all the same. If and when that day does come, don’t forget that Bolt helped you get there and was probably a necessary step in your journey.

To the C# Programmer

If you say visual scripting isn’t coding, then I’m pretty sure by that logic digital art isn’t art because it’s not done “by hand.” Text isn’t what makes it coding, just like using assembly language isn’t required to be a programmer.

Even if you don’t use visual scripting you can probably read it and help others. It’s okay to nudge folks in the direction of text-based coding. It is after all a more complete tool, but don’t be a jerk about it or make people feel like they are wasting their time. You aren’t superior just because you started coding earlier, had a parent that taught you to program, or were lucky enough to study computer science in college. Instead, I think you have a duty to support those who are getting started just like you did many years ago.

To the Bolt Engineers

Ha! Imagine that you are actually reading this.

I know you work hard. I know you are doing your best. I know you are doing good things. Keep it up. You are helping to get more people into game development and that is a good thing for all of us.

One small request? Please put your weekly work log in a separate discord channel so we can see them all together or catch up if we miss a few. The Chat channel seems like one of the worst places to put those posts.

To Unity Management

I’m glad you’ve realized that Unity was a poop show and you are doing your best to fix it. It’s a long process and we expect good things in the future.

BUT! I think you made a mistake with Bolt 2 and you let the larger Bolt community down. It was that same community that helped build Bolt into an asset you wanted to buy. You told us one thing and you did another. You made a promise and you broke it. Just look at the Bolt discord a year ago vs. now. It’s a very different community and those who built it have largely disappeared.

Stop selling Bolt as a complete programming tool. And seriously! There is no video game development without coding. That’s a fucking lie and you know it. If you don’t? That’s a bigger problem.

I am sure that you will make more money with Bolt integrated into Unity than if Bolt 2 had continued. That’s okay. Just don’t pretend that wasn’t a huge piece of the motivation. Be honest with your community. Bolt and other visual scripting tools are stepping stones. It’s part of a larger journey. It’s not complicated. It’s not demeaning. It’s just the truth. We can handle the truth. Can you?

To the YouTuber

If your title or thumbnail for a Bolt video contains the words “without Code” you are doing that for clicks and views. It’s not serving your audience and it’s not helping them make games. You are playing a game (the YT game). So please stop.