Zach Ebner
Programmer
Thomas College Undergraduate Student, 2019
Major: Computer Science

What made you want to join this project?:
The internship experience, the opportunity to work in game design, and the programming. Especially the programming, since I don't have a lot of experience working with C#, the language most games are built with these days, and it's nice to be able to say I've programmed something outside a classroom setting. This will also be a nice resume piece that I can present to someone as an achievement.

Where do you see this project going?:
With a year's worth of work on it, I see it being used in the classroom and across the educational sector, and being presented as a technological innovation in education as well.

What do you want to learn from this project?:
I want to take away the experience of being on a team, especially with the programmers. I haven't worked in a group programming setting yet, except on classroom projects. I also want experience working not just with the programming team but with the curriculum and art teams as well. Later in my career I see myself working on a team of programmers, whose size will depend on the size of the company, so this is great experience for that.

Where do you see yourself in 10 years?:
I see myself at a statewide business or a mid-size company, working with the newest technology out there; given the trend lately, it might be more cloud-based than anything. But it is hard to predict what I'll be doing because of how quickly technology changes, so all I can do is stay relevant so that I can shift toward that future.

Comments

  1. After the meeting on Thursday, I began working to refresh myself on Unity and C# topics. I will periodically be brushing up on different items as needed, which so far has been handled through web documents and videos. Specifically, I have begun looking at Timeline and Cinemachine for making cut scenes. The major benefit of these cut scenes is that they let us focus user attention on a particular aspect or subject. Objects in the scene with animations can be coordinated, and objects without animations but with moving parts can be recorded as well. For example, a human would have idle or walking animations, while a tank might be composed of separate components that the developer can move (the tank turret can swivel). This may be useful for animating the sort of "pop-up" book that serves as a level-choice and introduction mechanic (the animation of turning a book page and having some level overview content pop up).
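
    As a rough illustration of how one of these cut scenes could be wired up in code, here is a minimal sketch that plays a Timeline through a PlayableDirector and hands control back when it finishes. The class and field names are my own placeholders, not project code.

        using UnityEngine;
        using UnityEngine.Playables;

        // Minimal sketch: plays a Timeline cut scene via a PlayableDirector and
        // restores player control when the Timeline finishes.
        public class CutsceneTrigger : MonoBehaviour
        {
            [SerializeField] private PlayableDirector director; // the Timeline to play

            public void PlayCutscene()
            {
                director.stopped += OnCutsceneFinished;
                director.Play();
                // disable player input here so the shot isn't interrupted
            }

            private void OnCutsceneFinished(PlayableDirector d)
            {
                d.stopped -= OnCutsceneFinished;
                // re-enable player input here
            }
        }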

  2. Friday, January 12th at 2 pm, the programmers met to discuss the game, its direction, and who is working on what. The major question we have is: are we working towards VR or AR? In either case, it changes the decision to work in 3rd or 1st person. Additionally, the programmers find that we really need to have a base game down before scaling it into AR or VR; right now it is too much to try to establish the game itself while also coordinating AR/VR mechanics, because none of us have much experience towards that end.

    Our current direction has us working with this in mind: the game doesn't begin with too much exposition, because the middle schoolers will get bored, so we have them view the separate levels with their unique characters and circumstances, choose one, and enter that storyline. The layout for choosing the level is a sort of thick "pop-up" book on a table that gives the overview information. At the very least, each page should give some indication of the level, its learning goals, and the main character that will guide the player's character. We were thinking the player character is really a blank slate, because it is supposed to be the player in that space, similar to how Nintendo never wanted to give their Legend of Zelda hero a name or voice, because they wanted the player to project themselves onto that hero. (He subsequently has a name, but still does not speak.)

    Having talked with Casey about his clock system, I can probably work a day/night cycle into it, but later in development (a rough sketch of what I mean follows below). For now, my focus is on getting something started for the level menu with the book. The book will need either animations or moving parts for the desired functionality. - Zachary Ebner 1/13/18
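
    That day/night sketch assumes the clock exposes a 0-to-1 time-of-day value that drives the scene's directional light; this is my assumption, since Casey's clock system isn't finalized.

        using UnityEngine;

        // Day/night sketch: sweeps the sun through a full rotation each day.
        // timeOfDay is assumed to be fed by the clock (0 = midnight, 0.5 = noon).
        public class DayNightCycle : MonoBehaviour
        {
            [SerializeField] private Light sun;     // the scene's directional light
            [Range(0f, 1f)] public float timeOfDay; // would be driven by the clock

            void Update()
            {
                // -90 puts midnight below the horizon and noon directly overhead.
                sun.transform.rotation = Quaternion.Euler(timeOfDay * 360f - 90f, 170f, 0f);
                // Fade the light out as the sun dips below the horizon.
                sun.intensity = Mathf.Clamp01(Vector3.Dot(-sun.transform.forward, Vector3.up));
            }
        }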

  3. I have additional thoughts on AR that were also discussed among the programmers. The most common access to AR is by phone. Phone AR usually uses the phone's camera and places objects on the screen so that they look like they are in the environment the camera displays. However, these objects are usually attached to the phone and its screen, not to the environment the camera sees: move the phone, and the camera displays what it sees, but the game/application objects move with the phone; so you could place a little animated dog on a wall. Hololens AR seems to scan the room to learn the major objects and boundaries and work the application around them. It can be very touchy about that, but does a decent job. One other form of AR attaches the loaded scene/objects to a physical object placed in the environment (like AR cards). On a phone, the camera's view would include the object with the AR objects bound to it, so moving the camera to a new view usually doesn't show the AR application objects, because they are tied to the card.

    Which one will we go for in the end? The Hololens would provide the more encompassing immersion, if it works correctly, but the Hololens is still in its developmental stages with no real consumer version and is extremely expensive for a classroom setting. Additionally, we only have the one device, so it will be difficult to constantly test our game in that environment. The phone version gives more students a chance to participate themselves, but it is harder to put our gameplay and story learning elements into it in the manner that we want. Most of these AR applications are referred to as experience-based: somewhat interactive, but less of a game. The limitation could be described as building the Titanic model and having it displayed on the phone's screen (maybe attached to a scanned object like an AR card), but the interaction would probably revolve around selecting different parts of the ship and reading informational panels. Character models and pictures could be included, along with other object pop-ups and maybe some special effects, like the ship actually riding the waves or hitting the iceberg. This does not include the type of character control, movement, and other mechanics that we are already planning to have as part of the game. I could be wrong on some of this if there is some new innovation towards this end, but I would still point out that we need a base game, as we have little experience in programming for AR. - Zachary Ebner 1/13/18 - 12:01 PM

  4. 1/21/18 - Commenting on Week of 1/14/18-1/20/18

    Began the week examining the project and brainstorming level selection ideas. Found some sources that help with switching between characters, and videos on different types of level selection. One good one that might help with the pop-up book idea of level selection had examples of the player still being controlled in a small hub world where different objects held triggers for loading different levels. The player could walk up to these objects and activate the trigger to enter the corresponding level. Character selection options might help me filter through each "book page" as a separate entity allowing access to a new level. In each case, it boils down to swapping scenes and then covering the screen with a canvas for level loading; a minimal sketch of that is below.
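
    In code, that comes down to activating a full-screen loading canvas, then loading the new scene asynchronously. The scene name and canvas field are placeholders, not actual project assets.

        using UnityEngine;
        using UnityEngine.SceneManagement;

        // Sketch of the "swap scenes behind a loading canvas" idea.
        public class LevelLoader : MonoBehaviour
        {
            [SerializeField] private GameObject loadingCanvas; // full-screen loading UI

            public void LoadLevel(string sceneName)
            {
                loadingCanvas.SetActive(true);        // cover the screen first
                StartCoroutine(LoadAsync(sceneName));
            }

            private System.Collections.IEnumerator LoadAsync(string sceneName)
            {
                AsyncOperation op = SceneManager.LoadSceneAsync(sceneName);
                while (!op.isDone)
                    yield return null;                // a progress bar could update here
            }
        }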

    The next few days I set up a personal testing area separate from the proving grounds, because the first time I tried working in the proving grounds, my computer had caching errors and other project code wouldn't compile, so I couldn't test mine. I tried to work in my own testing environment scene in the proving grounds project, but couldn't run it due to those compiler errors. I also attempted loading screen and selection menu work; it was not fruitful.

    Thursday, I began work with Timeline, since I figured there would be a specific angle for viewing the level-selecting pop-up book. I can position the camera over a table with the book on it. I can have other books on the table move around and place themselves through recorded Timeline clips. I can also set up different camera angles for viewing, switch between active cameras, and use Cinemachine for dynamic camera selection and object focus. I created a test Timeline with different free book assets moving around on the table, stacking themselves and positioning a book at the front. Began work with Cinemachine virtual cameras, but haven't gotten far yet.

    While still considering the pop-up book scene for level selection, I found a free asset for page flipping, but I believe it is set up for 2D modes only. I might try it out.

  5. 1/22/18 - Made significant progress in understanding Cinemachine and coordinating animation clips, as well as recording custom ones. I have two major demo scenes: one with a camera pointed at a table where books are moving and repositioning themselves, and one where I used the Adam asset pack that Unity released to test Cinemachine with targeted models, coordinating model animations, working with virtual cameras, switching between shots and angles, and adding some audio. All of it gets sequenced in Timeline.
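
    For the shot switching specifically, Cinemachine blends to whichever virtual camera has the highest priority, so cutting between shots can be as simple as bumping priorities. A minimal sketch with illustrative names and values:

        using UnityEngine;
        using Cinemachine;

        // Sketch: cycle through Cinemachine virtual cameras by raising priority;
        // the CinemachineBrain blends to the highest-priority camera.
        public class ShotSwitcher : MonoBehaviour
        {
            [SerializeField] private CinemachineVirtualCamera[] shots;
            private int active;

            public void NextShot()
            {
                shots[active].Priority = 10;          // demote the current shot
                active = (active + 1) % shots.Length;
                shots[active].Priority = 20;          // blend to the next shot
            }
        }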

  6. 1/25/18

    The previous night I also tested some particle effects built from the standard assets. Made a somewhat decent torch with fire, a glowing light, and smoke, and incorporated it into the animated scene. Both the Cinemachine test scene and the animation test scene are imported into the proving grounds. They can be found under Zach's Test Facility in the imported project folder. The two main compiled scenes each have a Playable Director timeline.
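
    One way to sell a torch's glow, for what it's worth, is to jitter its point light with Perlin noise; this is a minimal sketch with illustrative values, not necessarily the exact setup in the scene.

        using UnityEngine;

        // Sketch: flicker a torch's point light using smooth Perlin noise.
        public class TorchFlicker : MonoBehaviour
        {
            [SerializeField] private Light torchLight;
            [SerializeField] private float baseIntensity = 1.5f;
            [SerializeField] private float flickerAmount = 0.5f;

            void Update()
            {
                float noise = Mathf.PerlinNoise(Time.time * 3f, 0f); // smooth 0..1 value
                torchLight.intensity = baseIntensity + (noise - 0.5f) * flickerAmount;
            }
        }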

  7. 1/29/18

    Making headway on level selection mechanics. Thought of a new idea today: a hub world. Even if we don't follow through with it, the concept can be recycled. It uses colliders to detect the presence of the player within the collider area; then, while within the area, the player can hit a button to load the next scene (a minimal sketch follows below). I will be working with this to incorporate timelines that also trigger in these zones, camera focus onto the specific area, effects, and the display of level information, all only while within the collider zone. It took a significant amount of time today, but it is working as intended thus far.
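
    Here is that sketch; the "Player" tag and the E key are assumptions for illustration, and the collider must be marked as a trigger.

        using UnityEngine;
        using UnityEngine.SceneManagement;

        // Sketch: while the player stands inside this trigger zone, a button
        // press loads the zone's level.
        [RequireComponent(typeof(Collider))] // set the collider to "Is Trigger"
        public class LevelZone : MonoBehaviour
        {
            [SerializeField] private string levelScene; // scene this zone leads to
            private bool playerInside;

            private void OnTriggerEnter(Collider other)
            {
                if (other.CompareTag("Player")) playerInside = true;  // show level info here
            }

            private void OnTriggerExit(Collider other)
            {
                if (other.CompareTag("Player")) playerInside = false; // hide it again
            }

            private void Update()
            {
                if (playerInside && Input.GetKeyDown(KeyCode.E))
                    SceneManager.LoadScene(levelScene);
            }
        }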

    Currently, I have several scenes in the proving grounds under the Zachary Ebner test facility for testing these functions. When testing, I had to create a build of the game with all the scenes I wanted to be able to switch to. The build, however, failed, though I could still test the scene switching. No build of the game seems to have been saved: I searched the folder where I selected the executable build to be placed, and it was empty. At least I can still test anyway, but the build problem may come back to haunt us later.

  8. 2/10/18

    Made progress on a hub level room idea. Due to project changes, most of this will probably not be used. Possible recycled ideas include a tutorial level that introduces AR, in case the students have never experienced it before, and gives an overview of how our game will work. For example, the room could go over the controls and let them quickly try out the AR elements with simple objects to understand how the rest of the game will go, along with some traditional gameplay and then a switch to AR, to give them a feel for the mechanics. Other than that, I am moving forward with work on other items, like AR mechanics.

  9. 2/19/18 - Beginning particle work today, since I cannot find the items I was assigned to animate. Last week was full of difficulty as I tried to build AR projects for iOS, which completely failed; building for iOS requires a macOS machine and then an iPad to build to, so I couldn't test anything. Will have a programmer meeting later today. For particle systems, I aim to make smoke, a sort of glow that may be produced by the burning coal in the furnace, and possibly some fiery wisps that curl out of the oven when coal is thrown in (if that is accurate). Side note: it is extremely hard to work on this project on the MacBook; one small screen and no external mouse option without Apple's expensive USB-C dongle attachment.

  10. 2/19/18

    Programmer meeting occurred with Professor P. and all programmers. Updates on progress were given. Personally, I will be animating assets soon. Character models should receive similar skeletons so general animations (like walking and sitting) can be inherited by all characters. Machine assets can easily be animated if built in parts, not one whole model. I will be looking to take care of these animations in the future, as they become available.

    The rest of today I spent with particle systems. I have created a few options for smoke for the funnel towers of the Titanic. I have also created a few options for fire glows, so they can be placed along with the coal in ovens. I will have to tweak how the glow comes out of the opening once the furnace model is ready.

  11. 2/20/18

    Uploaded my particle systems to the proving grounds today. They consist of 4 smoke examples and 3 glowing-light examples for burning fires/coal. All of these can be tweaked; they are currently implemented as prefabs. I also added the capability for WindZones to affect the smoke, so it can blow in the breeze (sketch below). Note: when I uploaded, it was under Meghan Raymond's name because I am using the MacBook.
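
    The WindZone hookup comes down to enabling the particle system's External Forces module; a minimal sketch (the multiplier value is illustrative):

        using UnityEngine;

        // Sketch: let WindZones in the scene push this smoke around.
        [RequireComponent(typeof(ParticleSystem))]
        public class WindAffectedSmoke : MonoBehaviour
        {
            [SerializeField, Range(0f, 2f)] private float windInfluence = 1f;

            void Start()
            {
                var external = GetComponent<ParticleSystem>().externalForces;
                external.enabled = true;             // WindZones now affect particles
                external.multiplier = windInfluence; // scale the wind's strength
            }
        }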

  12. Been working a lot with Mixamo. Determined that I can create somewhat generic animations, but these animations only work with models passed through Mixamo's skeleton creator. So, I can compile a bunch of general animations that should work with most characters we create, as long as I build a skeleton for each model in Mixamo. This might also be tied to Fuse, since all the characters I tested with were created in Fuse, uploaded to Mixamo, and imported from there; for instance, I tried to animate Unity's standard Ethan character with the Mixamo animations, and it would not work. At least we have options. Additionally, if animations look wonky or weird on one particular model, I can easily create custom animations in Mixamo that address that model's problems (a sketch of the shared-set-plus-override idea follows below). Note that the custom animations will need to be named clearly for that model so as not to cause confusion.
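
    Here is that sketch; the shared controller, the clip field, and the "Walk" clip name are placeholders, not project assets.

        using UnityEngine;

        // Sketch: give every Mixamo-rigged character the same shared controller,
        // and override a single clip on models where it looks wrong.
        public class SharedAnimationSetup : MonoBehaviour
        {
            [SerializeField] private RuntimeAnimatorController sharedController;
            [SerializeField] private AnimationClip customWalk; // optional per-model fix

            void Start()
            {
                var animator = GetComponent<Animator>();
                if (customWalk == null)
                {
                    animator.runtimeAnimatorController = sharedController;
                }
                else
                {
                    // Replace just the problem clip for this model.
                    var overrides = new AnimatorOverrideController(sharedController);
                    overrides["Walk"] = customWalk; // "Walk" = placeholder clip name
                    animator.runtimeAnimatorController = overrides;
                }
            }
        }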

  13. Began character creation with Adobe Fuse. It is good for relatively quick modeling of the face and body, but bad for stylized clothing to match the era; creating cloth is difficult, and Marvelous Designer is complex. It took a while to figure out a working pipeline between Fuse, Mixamo, and Marvelous Designer as far as exporting and importing goes, and I cannot guarantee that Marvelous Designer will work with my models.

  14. Still trying to figure out clothing and hair for the models. Unfortunately, Marvelous Designer is just too complex and would take months to fully understand, on top of the fact that it demands clothing-design knowledge as well. Unity doesn't have the greatest asset tools for making clothing. I may try looking into editing and customizing existing Fuse clothing, but this may not accurately reflect the historical style. Unfortunately, it has taken me a while to come to this conclusion, as I have scoured a ton of internet resources only to find slim help, while others demand a higher knowledge of clothing design.

  15. Can't figure out how to post screenshots to the blog; I would place a character shot here to show progress. Basically, I am getting very close to a finished clothes set that can be given to the officers. I will have one officer completely finished, and then some more close behind once I really get a grasp on Photoshop.

