Dev Log
Raycasting - It's mighty useful
What is Raycasting?
Raycasting is a lightweight and performant way to reach out into a scene and see what objects lie in a given direction. You can think of it as a long stick used to poke and prod around a scene. When something is found, we can get all kinds of information about that object and gain access to the components on it.
So… It’s pretty useful and a tool you should have in your game development toolbox.
Three Important Bits
The examples here are all going to be 3D, if you are working on a 2D project the ideas and concepts are nearly identical - with the biggest difference being that the code implementation is a tad different.
It’s also worth noting that the code for all the raycasting in the following examples, except for the jumping example, can be put on any object in the scene, whether that is the player or maybe some form of manager.
The final and really important tidbit is that raycasting is part of the physics engine. This means that for raycasting to hit or find an object, that object needs to have a collider or a trigger on it. I can’t tell you how many hours I’ve spent trying to debug raycasting only to find I forgot to put a collider on an object.
But First! The Basics.
The basic Raycast function
We need to look at the Raycast function itself. The function has a ton of overloads which can be pretty confusing when you’re first getting started.
That said, using the function basically breaks down into 5 pieces of information - the first two of which are required in all versions of the function. Those pieces of information are:
A start position.
The direction to send the ray.
A RaycastHit, which contains all the information about the object that was hit.
How far to send the ray.
Which layers can be hit by the raycast.
It’s a lot, but not too bad.
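Here's a minimal sketch of what the full five-parameter overload might look like in a script - the class name, distance, and mask field are my own placeholders:

```csharp
using UnityEngine;

public class BasicRaycast : MonoBehaviour
{
    [SerializeField] private LayerMask layerMask; // which layers can be hit

    private void Update()
    {
        Vector3 origin = transform.position;   // a start position
        Vector3 direction = transform.forward; // the direction to send the ray

        // The five pieces of information from the list above, in order.
        if (Physics.Raycast(origin, direction, out RaycastHit hit, 10f, layerMask))
        {
            Debug.Log($"Hit {hit.collider.name}");
        }
    }
}
```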
Defining a ray with a start position and a direction (both Vector3)
Raycast using a Ray
Unity does allow us to simplify the input parameters, just a bit, with the use of a ray. A ray essentially stores the start position and the direction in one container, allowing us to reduce the number of input parameters for the raycast function by one.
Notice that we are defining the RaycastHit inline with the use of the keyword out. This effectively creates a local variable with fewer lines of code.
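A quick sketch of the Ray-based version, meant to live inside a MonoBehaviour's Update (the 10-meter distance is an arbitrary choice of mine):

```csharp
// Package the start position and direction into a Ray,
// then pass the Ray to the raycast - one fewer parameter.
Ray ray = new Ray(transform.position, transform.forward);

// "out" declares the RaycastHit inline - no separate declaration needed.
if (Physics.Raycast(ray, out RaycastHit hitInfo, 10f))
{
    Debug.Log(hitInfo.point);
}
```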
Ok Now Onto Shooting
Creating a ray from the camera through the center of the screen
To apply this to first-person shooting, we need a ray that starts at the camera and goes in the camera’s forward direction.
Then, since the raycast function returns a boolean - true if it hit something, false if it didn't - we can wrap the raycast in an if statement.
In this case, we could forgo the distance, but I'll set it to something reasonable. I will, however, skip the layer mask since I want to be able to shoot everything in the scene.
When I do hit something I want some player feedback so I’ll instantiate a prefab at the hit point. In my case, the prefab has a particle system, a light, and an audio source just to make shooting a bit more fun.
Okay, but what if we want to do something different when we hit a particular type of target?
There are several ways to do this, the way I chose was to add a script to the target (purple sphere) that has a public “GetShot” function. This function takes in the direction from the ray and then applies a force in that direction plus a little upward force to add some extra juice.
Complete first person shooting example
The unparenting at the end of the GetShot function is to avoid any scaling issues as the spheres are parented to the cubes below them.
Then back to the raycast, we can check if the object we hit has a “Target” component on it. If it does, we call the “GetShot” function and pass in the direction from the ray.
The function getting called could of course be on a player or NPC script and do damage or any other number of things needed for your game.
The RaycastHit gives us access to the object hit and thus all the components on that object so we can do just about anything we need.
But! We still need some way to trigger this raycast and we can do that by wrapping it all in another if statement that checks if the left mouse button was pressed. And all of that can go into our update function so we check every frame.
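Putting all of that together, the shooting example might look something like the sketch below. The class names, force values, and the legacy Input calls are my own choices, not the only way to wire this up:

```csharp
using UnityEngine;

public class Shooter : MonoBehaviour
{
    [SerializeField] private Camera cam;           // the first-person camera
    [SerializeField] private GameObject hitPrefab; // particles, light, audio

    private void Update()
    {
        // Trigger the raycast when the left mouse button is pressed.
        if (Input.GetMouseButtonDown(0))
        {
            // A ray from the camera in the camera's forward direction.
            Ray ray = new Ray(cam.transform.position, cam.transform.forward);

            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                // Player feedback at the exact point of impact.
                Instantiate(hitPrefab, hit.point, Quaternion.identity);

                // If the object has a Target component, tell it to react.
                if (hit.collider.TryGetComponent(out Target target))
                {
                    target.GetShot(ray.direction);
                }
            }
        }
    }
}

public class Target : MonoBehaviour
{
    public void GetShot(Vector3 direction)
    {
        // Push in the ray's direction, plus a little upward force for juice.
        GetComponent<Rigidbody>().AddForce(
            direction * 5f + Vector3.up * 2f, ForceMode.Impulse);

        // Unparent to avoid scaling issues from the parent cube.
        transform.parent = null;
    }
}
```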
Selecting Objects
Another common task in games is to click on objects with a mouse and have the object react in some way. As a simple example, we can click on an object to change its color and then have it go back to its original color when we let go of the mouse button.
To do this, we'll need two extra variables to hold references to a mesh renderer as well as the color of the material on that mesh renderer.
For this example, I am going to use a layer mask. To make use of the layer mask, I’ve created a new layer called “selectable” and changed the layer of all the cubes and spheres in the scene, and left the rest of the objects on the default layer. This will prevent us from clicking on the background and changing its color.
Complete code for Toggling objects color
Then in the script, I created a private serialized field of the type LayerMask. Flipping back into Unity, the value of the layer mask can be set to "selectable."
Then if and else if statements check for the left mouse button being pressed and released, respectively.
If the button is pressed we’ll need to raycast and in this case, we need to create a ray from the camera to the mouse position.
Thankfully Unity has given us a nice built-in function that does this for us!
With our ray created we can add our raycast function, using the created ray, a RaycastHit, a reasonable distance, and our layer mask.
If we hit an object on our selectable layer, we can cache the mesh renderer and the color of the first material. The caching is so when we release the mouse button we can restore the color to the correct material on the correct mesh renderer.
Not too bad.
Notice that I’ve also added the function Debug.DrawLine. When getting started with raycasting it is SUPER easy to get rays going in the wrong direction or maybe not going far enough.
The DrawLine function does just what it says, drawing a line from one point to another. There is also a duration parameter - how long the line is drawn, in seconds - which can be particularly helpful when the raycasting is only done for one frame at a time.
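Here's a sketch of how the pieces above might fit together - the class name and highlight color are mine, and I'm using the legacy Input class for brevity:

```csharp
using UnityEngine;

public class ColorChanger : MonoBehaviour
{
    [SerializeField] private LayerMask selectableMask; // set to "selectable"
    [SerializeField] private Camera cam;

    private MeshRenderer cachedRenderer;
    private Color cachedColor;

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Built-in helper: a ray from the camera through the mouse position.
            Ray ray = cam.ScreenPointToRay(Input.mousePosition);

            // Visualize the ray for 2 seconds to debug direction and length.
            Debug.DrawLine(ray.origin, ray.origin + ray.direction * 100f,
                Color.red, 2f);

            if (Physics.Raycast(ray, out RaycastHit hit, 100f, selectableMask))
            {
                // Cache the renderer and original color so we can restore them.
                cachedRenderer = hit.transform.GetComponent<MeshRenderer>();
                cachedColor = cachedRenderer.material.color;
                cachedRenderer.material.color = Color.red;
            }
        }
        else if (Input.GetMouseButtonUp(0) && cachedRenderer != null)
        {
            // Restore the original color on release.
            cachedRenderer.material.color = cachedColor;
            cachedRenderer = null;
        }
    }
}
```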
Moving Objects
Now at first glance moving objects seems very similar to selecting objects - raycast to the object and move the object to the hit point. I’ve done this a lot…
The problem is that the object comes screaming towards the camera, because the hit point is closer to the camera than the object's center. Probably not what you or your players want to happen.
Don’t do this!!
One way around this is to use one raycast to select the object and a second raycast to move the object. Each raycast will use a different layer mask to avoid the flying cube problem.
I’ve added a “ground” layer to the project and assigned it to the plane in the scene. The “selectable” layer is assigned to all the cubes and spheres. The values for the layer masks can again be set in the inspector.
To make this all work, we’re also going to need variables to keep track of the selected object (Transform) and the last point hit by the raycast (Vector3).
To get our selected object, we’ll first check if the left mouse button has been clicked and if the selected object is currently null. If both are true, we’ll use a raycast just like the last example to store a reference to the transform of the object we clicked on.
Note the use of the "selectable" layer mask in the raycast function.
Our second raycast happens when the left mouse button is held down AND the selected object is NOT null. Just like the first raycast this one goes from the camera to the mouse, but it makes use of the second layer mask, which allows the ray to go through the selected object and hit the ground.
We then move the selected object to the point hit by the raycast and, just for fun, we move it up a bit as well. This lets us drag the object around.
If we left it like this and let go of the mouse button the object would stay levitated above the ground. So instead, when the mouse button comes up we can set the position to the last point hit by the raycast as well as setting the selectedObject variable to null - allowing us to select a new object.
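The two-raycast approach might be sketched like this - again the class name, distances, and hover offset are my own placeholder values:

```csharp
using UnityEngine;

public class ObjectMover : MonoBehaviour
{
    [SerializeField] private Camera cam;
    [SerializeField] private LayerMask selectableMask; // cubes and spheres
    [SerializeField] private LayerMask groundMask;     // the plane

    private Transform selectedObject;
    private Vector3 lastHitPoint;

    private void Update()
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);

        // First raycast: pick up an object.
        if (Input.GetMouseButtonDown(0) && selectedObject == null)
        {
            if (Physics.Raycast(ray, out RaycastHit hit, 100f, selectableMask))
            {
                selectedObject = hit.transform;
            }
        }
        // Second raycast: drag it. The ground mask lets the ray pass
        // through the selected object and hit the plane instead.
        else if (Input.GetMouseButton(0) && selectedObject != null)
        {
            if (Physics.Raycast(ray, out RaycastHit hit, 100f, groundMask))
            {
                lastHitPoint = hit.point;
                selectedObject.position = hit.point + Vector3.up; // hover a bit
            }
        }
        // Release: set it down and allow a new selection.
        else if (Input.GetMouseButtonUp(0) && selectedObject != null)
        {
            selectedObject.position = lastHitPoint;
            selectedObject = null;
        }
    }
}
```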
Jumping
The last example I want to go over in any depth is jumping, which can easily be extended to other platforming needs like detecting a wall, a slope, or the edge of a platform. I'd strongly suggest checking out Sebastian Lague's series on creating a 2D platformer if you want to see raycasting put to serious use - not to mention a pretty good character controller for a 2D game!
For this example, I’ve created a variable to store the rigidbody and I’ve cached a reference to that rigidbody in the start function.
For basic jumping, generally, the player needs to be on the ground in order to jump. You could use a trigger combined with OnTriggerEnter and OnTriggerExit to track if the player is touching the ground, but that’s clumsy and has limitations.
Instead, we can do a simple short raycast directly down from the player object to check and see if we’re near the ground. Once again this makes use of layer mask and in this case only casts to the ground layer.
Full code for jumping
I’ve wrapped the raycast into a separate function that returns the boolean from the raycast. The ray itself goes from the center of the player character in the down direction. The raycast distance is set to 1.1 since the player object (a capsule) is 2 meters high and I want the raycast to extend just beyond the object. If the raycast extends too far, the ground can be detected when the player is off the ground and the player will be able to jump while in the air.
I’ve also added in a Debug.DrawLine function to be able to double-check that the ray is in the correct place and reaching outside the player object.
Then in the update function, we check if the spacebar is pressed along with whether the player is on the ground. If both are true, we apply a force to the rigidbody and the player jumps.
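A sketch of the whole thing, assuming a 2-meter capsule with its origin at the center; the jump force value is an arbitrary placeholder:

```csharp
using UnityEngine;

public class PlayerJump : MonoBehaviour
{
    [SerializeField] private LayerMask groundMask;
    private Rigidbody rb;

    private void Start()
    {
        // Cache the rigidbody reference once.
        rb = GetComponent<Rigidbody>();
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space) && IsGrounded())
        {
            rb.AddForce(Vector3.up * 5f, ForceMode.Impulse);
        }
    }

    private bool IsGrounded()
    {
        // The capsule is 2 m tall, so 1.1 m reaches just past its bottom.
        Debug.DrawLine(transform.position,
            transform.position + Vector3.down * 1.1f, Color.green);
        return Physics.Raycast(transform.position, Vector3.down, 1.1f, groundMask);
    }
}
```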
RaycastHit
The real star of the raycasting show is the RaycastHit variable.
It's how we get a handle on the object the raycast found, and there's a decent amount of information it can give us. In all the examples above we made use of "point" to get the exact coordinates of the hit. For me, this is the piece of information I use nine times out of ten when I raycast.
We can also get access to the normal of the surface we hit, which among other things could be useful if you want something to ricochet off a surface or if you want to have a placed object sit flat on a surface.
The RaycastHit can also return the distance from the ray’s origin to the hit point as well as the rigidbody that was hit (if there was one).
If you want to get really fancy you can also access bits about the geometry and the textures at the hit point.
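The commonly used RaycastHit members, sketched inside a MonoBehaviour - the comments note what each one is good for:

```csharp
if (Physics.Raycast(ray, out RaycastHit hit))
{
    Vector3 point = hit.point;       // exact world-space coordinates of the hit
    Vector3 normal = hit.normal;     // surface normal - ricochets, flush placement
    float distance = hit.distance;   // from the ray's origin to the hit point
    Rigidbody body = hit.rigidbody;  // null if the collider has no rigidbody

    // The fancier bits: geometry and texture info at the hit point.
    int triangle = hit.triangleIndex;
    Vector2 uv = hit.textureCoord;
}
```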
Other Things Worth Knowing
So those are 4 examples of common uses of raycasting, but there are a few other bits of info that could be good to know too.
There is an additional setting that affects raycasting: Physics.queriesHitTriggers. By default this value is true, and if it's true, raycasts will hit triggers. If it's false, raycasts will skip triggers. This can be helpful for raycasting to NPCs that have a collider on their body but also have a larger trigger surrounding them to detect nearby objects.
Next useful bit. If you don't set a distance for a raycast, Unity will default to an infinite distance - whatever infinity means to a computer… There are several reasons not to let the ray go to infinity - the jump example is one of them.
A very imprecise way of measuring performance
Raycasting can get a bad rap for performance. The truth is it’s pretty lightweight.
I created a simple example that raycasts between 1 and 1000 times per frame. In an empty scene on my computer, with 1 raycast I saw over 5000 fps. With 1000 raycasts per FRAME I saw 800 fps. More importantly, but no more precisely measured, the main thread only took a 1.0 ms hit when going from 1 raycast to 1000 raycasts - which isn't insignificant, but it's also not game-breaking. So if you are doing 10 or 20 raycasts, or even 100 raycasts per frame, it's probably not something you need to worry about.
1 Raycast per Frame
1000 Raycasts per Frame
Also worth knowing about is the RaycastAll function, which returns all the objects the ray intersects, not just the first one. Definitely useful in the right situation.
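A quick sketch of RaycastAll, again meant to live inside a MonoBehaviour:

```csharp
// Returns every collider the ray passes through, not just the first.
// Note: the returned hits are not guaranteed to be sorted by distance.
RaycastHit[] hits = Physics.RaycastAll(ray, 100f);
foreach (RaycastHit hit in hits)
{
    Debug.Log(hit.collider.name);
}
```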
Lastly, there are other types of "casting," not just raycasting: line casting, box casting, and sphere casting, all of which sweep their respective geometric shape and check for colliders and triggers in their path. Again, useful in the right situation - but beyond the scope of this tutorial.
Cinemachine. If you're not using it, you should be.
So full disclosure! This isn’t intended to be the easy one-off tutorial showing you how to make a particular thing. I want to get there, but this isn’t it. Instead, this is an intro. An overview.
If you’re looking for “How do I make an MMO RPG RTS 2nd Person Camera” this isn’t the tutorial for you. But! I learned a ton while researching Cinemachine (i.e. reading the documentation and experimenting) and I figured if I learned a ton then it might be worth sharing. Maybe I’m right. Maybe I’m not.
Cinemachine. What is it? What does it do?
Cinemachine setup in a Unity scene
Cinemachine is a Unity asset that quickly and easily creates high-functioning camera controllers without the need (but with the option) to write custom code. In just a matter of minutes, you can add Cinemachine to your project, drop in the needed prefabs and components and you’ll have a functioning 2D or 3D camera!
It really is that simple.
But!
If you’re like me you may have just fumbled your way through using Cinemachine and never really dug into what it can do, how it works, or the real capabilities of the asset. This leaves a lot of potential functionality undiscovered and unused.
Like I said above, this tutorial is going to be a bit different. Many other tutorials cover the flashy bits or just a particular camera type; this post will attempt to be a brief overview of all the features that Cinemachine has to offer. Future posts will take a look at more specific use cases such as cameras for a 2D platformer, 3rd person games, or functionality useful for cutscenes and trailers.
If there’s a particular camera type, game type, or functionality you’d like to see leave a comment down below.
How do you get Cinemachine?
Cinemachine in the Package Manager
Cinemachine used to be a paid asset on the asset store and as I remember it, it was one of the first assets that Unity purchased and made free for all of its users! Nowadays it takes just a few clicks and a bit of patience with the Unity package manager to add Cinemachine to your project. Piece of cake.
The Setup
Once you’ve added Cinemachine to your project the next step is to add a Cinemachine Brain to your Unity Camera. The brain must be on the same object as the Unity camera component since it functions as the communication link between the Unity camera and any of the Cinemachine Virtual Cameras that are in the scene. The brain also controls the cut or blend from one virtual camera to another - pretty handy when creating a cut scene or recording footage for a trailer. Additionally, the brain is also able to fire events when the shot changes like when a virtual camera goes live - once again particularly useful for trailers and cutscenes.
Cinemachine Brain
Cinemachine does not add more camera components to your scene, but instead makes use of so-called “virtual cameras.” These virtual cameras control the position and rotation of the Unity camera - you can think of a virtual camera as a camera controller, not an actual camera component. There are several types of Cinemachine Virtual Cameras each with a different purpose and different use. It is also possible to program your own Virtual Camera or extend one of the existing virtual cameras. For most of us, the stock cameras should be just fine and do everything we need with just a bit of tweaking and fine-tuning.
Cinemachine offers several prefabs or presets for virtual camera objects - you can find them all in the Cinemachine menu. Or if you prefer you can always build your own by adding components to gameObjects - the same way everything else in Unity gets put together.
As I did my research, I was surprised at the breadth of functionality, so at the risk of being boring, let’s quickly walk through the functionality of each Cinemachine prefab.
Virtual Cameras
Bare Bones Basic Virtual Camera inspector
The Virtual Camera is the barebones base virtual camera component slapped onto a gameObject with no significant default values. Other virtual cameras use this component (or extend it) but with different presets or default values to create specific functionality.
The Freelook Camera provides an out-of-the-box and ready-to-go 3rd person camera. Its most notable feature is the rigs that allow you to control and adjust where the camera is allowed to go relative to the player character or more specifically the Look At target. If you’re itching to build a 3rd person controller - check out my earlier video using the new input system and Cinemachine.
The 2D Camera is pretty much what it sounds like and is the virtual camera to use for typical 2D games. Settings like softzone, deadzone and look ahead time are really easy to dial in and get a good feeling camera super quick. This is a camera I intend to look at more in-depth in a future tutorial.
The Dolly Camera will follow along on a track that can be easily created in the scene view. You can also add a Cart component to an object and just like the dolly camera, the cart will follow a track. These can be useful to create moving objects (cart) or move a (dolly) camera through a scene on a set path. Great for cutscenes or footage for a trailer.
“Composite” Cameras
The word "composite" is my word. The prefabs below use a controlling script for multiple child cameras and don't function the same as a single virtual camera. Instead, they're a composite of different objects and multiple different virtual cameras.
Some of these composite cameras are easier to set up than others. I found the Blend List camera 100% easy and intuitive. Whereas the Clear Shot camera? I got it working but only by tinkering with settings that I didn’t think I’d need to adjust. The 10 minutes spent tinkering is still orders of magnitude quicker than trying to create my own system!!
The Blend List Camera allows you to create a list of cameras and blend from one camera to another after a set amount of time. This would be super powerful for recording footage for a trailer.
Blend List Camera
The State-Driven Camera is designed to blend between cameras based on the state of an animator. So when an animator transitions, from say running to idle, you might switch to a different virtual camera that has different settings for damping or a different look-ahead time. Talk about adding some polish!
The ClearShot Camera can be used to set up multiple cameras and then have Cinemachine choose the camera that has the best shot of the target. This could be useful in complex scenes with moving objects to ensure that the target is always seen or at least is seen the best that it can be seen. This has similar functionality to the Blend List Camera, but doesn’t need to have timings hard coded.
The Target Group Camera component can act as a “Look At” target for a virtual camera. This component ensures that a list of transforms (assigned on the Target Group Camera component) stays in view by moving the camera accordingly.
Out of the Box settings with Group Target - Doing its best to keep the 3 cars in the viewport
The Mixing Camera is used to set the position and rotation of a Unity camera based on the weights of its child cameras. This can be used in combination with animating the weights of the virtual cameras to move the Unity camera through a scene. I think of this as creating a bunch of waypoints and then lerping from one waypoint to the next. Other properties besides position and rotation are also mixed.
Ok. That’s a lot. Take a break. Get a drink of water, because that’s the prefabs, and there’s still a lot more to come!
Shared Camera Settings
There are a few settings that are shared between all or most of the virtual cameras. The cameras that don't share many settings fall into the "Composite Camera" category and have child cameras that DO share the settings. So let's dive into those settings to get a better idea of what they all do and, most importantly, what we can do with Cinemachine.
All the common and shared virtual camera settings
The Status line shows whether the camera is Live, in Standby, or Disabled - straightforward enough - but the "Solo" button next to the status feels like an odd fit. Clicking the button immediately gives visual feedback from that particular camera, i.e. it treats the camera as if it were the only, or solo, camera in the scene. If you are working on a complex cutscene with multiple cameras, I can see this feature being very useful.
The Follow Target is the transform of the object that the virtual camera will move with or attempt to follow, based on the algorithm chosen. The "composite" cameras don't require one, but all the regular virtual cameras will need a follow target.
The Look At Target is the transform for the object that the virtual camera will aim at or will try to keep in view. Often this is the same as the Follow Target, but not always.
The Standby Update determines how often the virtual camera is updated. Always updates the virtual camera every frame, whether the camera is live or not. Never only updates the camera when it is live. Round Robin, the default setting, updates the camera occasionally, depending on how many other virtual cameras are in the scene.
The Lens gives access to the lens settings on the Unity camera. This can allow you to change those settings per virtual camera. This includes a Dutch setting that rotates the camera on the z-axis.
The Transitions settings allow customization of the blend or transition from one virtual camera to or from this camera.
Body
The Body controls how the camera moves and is where we really get to start customizing the behavior of the camera. The first slot on the body sets the algorithm that will be used to move the camera. The algorithm chosen will dictate what further settings are available.
It’s worth noting that each algorithm selected in the Body works alongside the algorithm selected in the Aim (coming up next). Since these two algorithms work together no one algorithm will define or create complete behavior.
The transposer moves the camera in a fixed relationship to the follow target as well as applies an offset and damping.
The framing transposer moves the camera in a fixed screen-space relationship to the Follow Target. This is commonly used for 2D cameras. This algorithm has a wide range of settings to allow you to fine-tune the feel of the camera.
The orbital transposer moves the camera in a variable relationship to the Follow Target, but attempts to align its view with the direction of motion of the Follow Target. This is used in the free-look camera and, among other things, can be used for a 3rd person camera. I could also imagine this being used for an RTS-style camera where the Follow Target is an empty object moving around the scene.
The tracked dolly is used to follow a predefined path - the dolly track. Pretty straightforward.
Dolly track (Green) Path through a Low Poly Urban Scene
Hard lock to target simply sticks the camera at the same position as the Follow Target. The effect is the same as making the camera a child object - but with the added benefit of it being a virtual camera, not an actual Unity camera component that has to be managed. Maybe you're creating a game with vehicles and you want the player to be able to choose their perspective, with one or more of those perspectives fixed to positions in the vehicle?
The “do nothing” transposer doesn’t move the camera with the Follow Target. This could be useful for a camera that shouldn’t move or should be fixed to another object but might still need to aim or look at a target. Maybe for something like a security-style camera that is fixed on the side of a building but might still rotate to follow the character.
Aim
The Aim controls where the camera is pointed and is determined by which algorithm is used.
The composer works to keep the Look At target in the camera frame. There is a wide range of settings to fine-tune the behavior. These include look-ahead time, damping, dead zone and soft zone settings.
The group composer works just like the composer unless the Look At target is a Cinemachine Target Group. In that case, the field of view and distance will adjust to keep all the targets in view.
The POV rotates the camera based on user input. This allows mouse control in an FPS style.
The "same as follow target" does exactly what it says - it sets the rotation of the virtual camera to the rotation of the Follow Target.
“Hard look at” keeps the Look At target in the center of the camera frame.
Do Nothing. Yep. This one does nothing. While this sounds like an odd design choice, this is used with the 2D camera preset as no rotation or aiming is needed.
Noise
The noise settings allow the virtual camera to simulate camera shake. There are built-in noise profiles, but if that doesn’t do the trick you can also create your own.
Extensions
Cinemachine provides several out-of-the-box extensions that can add additional functionality to your virtual cameras. All the Cinemachine extensions extend the class CinemachineExtension, leaving the door open for developers to create their own extensions if needed. In addition, all existing extensions can also be modified.
Cinemachine Camera Offset applies an offset to the camera. The offset can be applied after the body, aim, noise or after the final processing.
Cinemachine Recomposer adds a final adjustment to the composition of the camera shot. This is intended to be used with Timeline to make manual adjustments.
Cinemachine 3rd Person Aim cancels out any rotation noise and forces a hard look at the target point. This is a bit more sophisticated than a simple “hard look at” as target objects can be filtered by layer and tags can be ignored. Also if an aiming reticule is used the extension will raycast to a target and move the reticule over the object to indicate that the object is targeted or would be hit if a shot was to be fired.
Cinemachine Collider adjusts the final position of the camera to attempt to preserve the line of sight to the Look At target. This is done by moving the camera away from gameObjects that obstruct the view. The obstacles are defined by layers and tags. You can also choose a strategy for moving the camera when an obstacle is encountered.
Cinemachine Confiner prevents the camera from moving outside of a collider. This works in both 2D and 3D projects. It’s a great way to prevent the player from seeing the edge of the world or seeing something they shouldn’t see.
Polygon collider setting limits for where the camera can move
Cinemachine Follow Zoom adjusts the field of view (FOV) of the camera to keep the target the same size on the screen no matter the camera or target position.
Cinemachine Storyboard allows artists and designers to add an image over the top of the camera view. This can be useful for composing scenes and helping to visualize what a scene should look like.
Cinemachine Impulse Listener works together with an Impulse Source to shake the camera. This can be thought of as a real-world camera that is not 100% solid and has some shake. A source could be set on a character’s feet and emit an impulse when the feet hit the ground. The camera could then react to that impulse.
Cinemachine Post Processing allows a postprocessing (V2) profile to be attached to a virtual camera. Which lets each virtual camera have its own style and character.
There are probably even more… but these were the ones I found.
Conclusion?
Cinemachine is nothing short of amazing and a fantastic tool to speed up the development of your game. If you're not using it, you should be. Even if it doesn’t provide the perfect solution that ships with your project it provides a great starting point for quick prototyping.
If there's a Cinemachine feature you'd like to see in more detail, leave a comment down below.
A track and Dolly setup in the scene - I just think it looks neat.
C# Extension Methods
Time is one of the biggest obstacles to creating games. We spend a lot of time writing code and debugging that code. And it’s not uncommon to find ourselves writing the same code over and over which is tedious and worse it’s error-prone. The less code you have to write and the cleaner that code is the faster you can finish your game!
Extension methods can help you do just that - write less code and cleaner code with fewer bugs. Which again means you can finish your game faster.
Extension methods allow us to operate directly on an instance rather than needing to pass that instance into a method, and maybe best of all, we can do this with types that we don't have access to, such as many of the built-in types in Unity or a type from an asset from the Asset Store. As the name suggests, extension methods allow us to extend and add functionality to any class or struct.
Automatic Conversion isn’t built in
As a side note, in my opinion, learning game development is all about adding tools to your toolbox and extension methods should be one of those tools. So let’s take a look at how they work and why they are better than some other solutions.
Concrete Example
Local function to do the conversion
In a past project, I needed to arrange gameObjects on a grid. The grid lattice was 1 by 1 and set on integer values. The problem, or in reality, the pain point comes from positions in Unity being a Vector3 which is made of 3 floats, not 3 integers.
There is a type Vector3Int and I used that struct to store the position of the objects.
But!
A static helper class with a static function is better, but not the best
Casting from Vector3 to Vector3Int isn’t built into Unity (the other direction is!). And sure, you could create a conversion operator, but that’s the topic of another post.
Helper Class Call
So, when faced with this inconvenience, my first thought, of course, was to write a function that takes in a Vector3, rounds each component, and returns a Vector3Int. This works perfectly fine, but the method lives inside a particular class, which means if I need to do the conversion somewhere else I need to copy the function into that second class. That means duplicating code, which generally isn't good practice.
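The first attempt might look something like this - the method name is my own:

```csharp
// A plain method inside one class - works, but has to be copied
// into every other class that needs the conversion.
private Vector3Int RoundToInt(Vector3 vector)
{
    return new Vector3Int(
        Mathf.RoundToInt(vector.x),
        Mathf.RoundToInt(vector.y),
        Mathf.RoundToInt(vector.z));
}
```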
Extension method!!!
Ok, fine. The next step is to move the function into a static helper class. I do this type of thing all the time. It’s really helpful. But the result is more code than we need. It’s not A LOT more, but still, it’s more than we need.
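The static helper version might be sketched like this - the class and method names are placeholders of mine:

```csharp
using UnityEngine;

public static class VectorHelpers
{
    public static Vector3Int RoundToInt(Vector3 vector)
    {
        return new Vector3Int(
            Mathf.RoundToInt(vector.x),
            Mathf.RoundToInt(vector.y),
            Mathf.RoundToInt(vector.z));
    }
}

// Call site - available everywhere, but reads a little long:
// Vector3Int gridPosition = VectorHelpers.RoundToInt(transform.position);
```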
If this was my own custom class or struct, I'd just add a public function that could handle the conversion, but I don't have access to the Vector3 struct. Yet I have some needed functionality that will be used repeatedly, AND I want to type as little as possible while maintaining the readability of the code.
And this situation? This is exactly where extension functions shine!
Extension Method Call
To turn our static function into an extension method, all we need to do is add the keyword “this” to the first input parameter of the static function. And then we can call the extension method as if it was part of the struct. Pretty easy and pretty handy.
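Concretely, the change is one keyword - here's the same helper as an extension method (names are still my placeholders):

```csharp
using UnityEngine;

public static class Vector3Extensions
{
    // "this" on the first parameter turns a static method
    // into an extension method on Vector3.
    public static Vector3Int RoundToInt(this Vector3 vector)
    {
        return new Vector3Int(
            Mathf.RoundToInt(vector.x),
            Mathf.RoundToInt(vector.y),
            Mathf.RoundToInt(vector.z));
    }
}

// Call site - reads as if RoundToInt were a member of Vector3:
// Vector3Int gridPosition = transform.position.RoundToInt();
```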
Important Notes
It's important to note that with extension methods the type you are extending needs to be the first input parameter of the function. Also, the static extension method needs to be inside a static class. Miss either of these steps and it won't work.
More Examples
So let’s look at some more examples of what you could do with extension methods. These of course are highly dependent on your game and what you need to do, but maybe these will spark some ideas and creativity.
Need to swap the Y and Z values of a Vector3? No problem!
Maybe you need to set the alpha of a sprite in a sprite renderer. Yep. We can do that.
Reset a transform? Locally? Globally? Piece of cake.
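Sketches of those three ideas, gathered in one static class (all names are mine; note that a transform's global scale can't be set directly, so the global reset only touches position and rotation):

```csharp
using UnityEngine;

public static class MoreExtensions
{
    // Swap the Y and Z values of a Vector3.
    public static Vector3 SwapYZ(this Vector3 vector)
    {
        return new Vector3(vector.x, vector.z, vector.y);
    }

    // Set the alpha of a SpriteRenderer's color.
    public static void SetAlpha(this SpriteRenderer renderer, float alpha)
    {
        Color color = renderer.color;
        color.a = alpha;
        renderer.color = color;
    }

    // Reset a transform locally...
    public static void ResetLocally(this Transform transform)
    {
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
        transform.localScale = Vector3.one;
    }

    // ...or globally (global scale is read-only in Unity).
    public static void ResetGlobally(this Transform transform)
    {
        transform.position = Vector3.zero;
        transform.rotation = Quaternion.identity;
    }
}
```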
Extension methods also work with inheritance. For example, most Unity UGUI components inherit from UnityEngine.UI.Graphic which contains the color information. So once again it would be easy to create an extension method to change the alpha for nearly every UGUI element.
Now taking another step down the tunnel of abstraction extension methods also work with generics. If you are scared of generics or have no idea what I’m talking about check out my earlier video on the topic.
Either way, let’s imagine you have a list and you want every other element in that list (or some other sorting). One way, and of course not the only way, to do that filtering would be with a generic extension method like so.
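One possible shape for that generic extension method - the name EveryOther is my own:

```csharp
using System.Collections.Generic;

public static class ListExtensions
{
    // Returns every other element of any List<T>, starting with the first.
    public static List<T> EveryOther<T>(this List<T> list)
    {
        var result = new List<T>();
        for (int i = 0; i < list.Count; i += 2)
        {
            result.Add(list[i]);
        }
        return result;
    }
}

// Usage:
// var odds = new List<int> { 1, 2, 3, 4, 5 }.EveryOther(); // 1, 3, 5
```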
That's it! They're pretty simple and easy to use, but I'd argue they provide another tool to write simpler, cleaner, and more readable code.