Saturday, October 23, 2021

A Secret Island Treasure to Inspire Many a Story

https://www.bbc.com/travel/article/20211020-the-rainbow-island-most-travellers-dont-know

With ochre-stained streams, crimson-hued beaches and enchanting salt caves, Iran's Hormuz Island is a geologist's Disneyland.

"You should get a taste of this soil," said Farzad Kay, my tour guide on southern Iran's Hormuz Island, as we stood at the foot of a ruby-red mountain that loomed majestically over the shoreline, engulfing the beach and waves in a crimson shadow. I approached his suggestion with some trepidation, as I was yet to understand this mysterious, mineral-laden landscape.

Set 8km off Iran's coast amid the murky blue waters of the Persian Gulf, Hormuz is a teardrop-shaped, shimmering salt dome embedded with layers of shale, clay and iron-rich volcanic rocks that glow in dazzling shades of red, yellow and orange due to the more than 70 minerals found here. Nearly every inch of Hormuz Island's awe-inspiring 42 sq km imparts a story of its formation.

According to Dr Kathryn Goodenough, principal geologist at the British Geological Survey who has previously worked in Iran, hundreds of millions of years ago, shallow seas formed thick layers of salt around the margins of the Persian Gulf. These layers gradually collided and interlayered with mineral-rich volcanic sediment in the area, causing the formation of the colourful landmass.

"Over the last 500 million years, the salt layers were buried deeply by younger layers of volcanic sediment. Since the salt is buoyant, over time, it has risen through cracks in the overlying rocks to reach the surface and form salt domes," said Dr Goodenough. She added that these thick layers of salt, many kilometres below the land, are actually present across much of the Persian Gulf area.

This geological makeup has resulted in ochre-stained streams, crimson-hued beaches and enchanting salt caves. In fact, Hormuz is often called the "rainbow island" because of the spectrum of chromatic hues that it exudes. It's also home to what's thought to be the only edible mountain in the world, which Kay was encouraging me to try.

Locals believe that the salt found at the Goddess of Salt mountain has the power to release any negative energy (Credit: Saeed Abdolizadeh/Alamy)

The red soil on the mountain I was standing near, called gelack, is caused by haematite, an iron oxide thought to be derived from the island's volcanic rocks. Not only is it a valuable mineral for industrial purposes, it also plays an important role in local cuisine. Used as a spice, it lends an earthy flavour to curries and goes perfectly with the local bread called tomshi, which means "a handful of something".

"The red soil is used as a sauce," explained Maryam Peykani, Farzad's wife. "This sauce is called soorakh and is spread on flatbread as it is almost cooked. Apart from its culinary usages, the red soil is also used [in paintings by] local artists, dyeing, creation of ceramics and cosmetics."

Beyond the ruby-red mountain, there's plenty else to explore on Hormuz. In the island's west there's a spectacular salt mountain known as the Goddess of Salt. Extending more than a kilometre, its pale caves and sharp-edged walls are covered by shimmering salt crystals that look like the giant columns of a marble palace.

You may also be interested in:
• How a Scottish mountain weighed the planet
• An ancient Roman mystery solved
• The cliff that changed our understanding of time

Locals believe that the salt possesses the healing power to soak up and release any negative energy, and Kay advised me to take my shoes off so my feet touched the salt dome. "The rock salt is known to release immense positive energy," he told me. "After having spent [time] in this valley, you are bound to feel much more invigorated, which is why the valley is also called the Energy Valley."

Similarly, in the island’s south-west is Rainbow Valley, a stunning display of multi-hued soil and vividly coloured mountains in shades of red, purple, yellow, ochre and blue. As I walked, I noticed patches of bright colours forming geometric patterns that glittered and gleamed as the sun's rays hit them.

In the nearby Valley of the Statues, rocks were weathered into fantastical shapes by thousands of years of wind erosion; with a bit of imagination, I could see birds, dragons and other mythical creatures. It was like admiring Earth's very own art gallery.

The island glows in shades of red, yellow and orange due to the more than 70 minerals found here (Credit: Lukas Bischoff/Alamy)

Despite the island’s surreal, kaleidoscopic natural colours, most travellers don't know about it. According to the Ports and Maritime Organization of Iran, just 18,000 visitors came here in 2019.

"This natural phenomenon is not fully discovered by world travellers despite its significant tourist attractions, historically and naturally," said Ershad Shan, another local, as I sank my teeth into a spicy, fragrant curry of sardines, red onion, lemon and orange, prepared using soorakh. "If more attention is paid to the infrastructural development of Hormuz, this island can be changed to be an important attraction for tourists."

Locals have started to offer home-cooked meals for tourists and driving rickshaws and motorcycles to transport people around the island. "We feel responsible for doing our bit for Hormuz. It's so rare and is a part of our identity," Shan said. "We feel an urgent need to contribute towards getting the world to take notice of this eco-heritage."

As I devoured my curry, it struck me that while Hormuz is without doubt a geologist's Disneyland, it is the edible soil, which literally runs through the veins of its inhabitants, that makes it truly special.

Geological Marvels is a BBC Travel series that uncovers the fascinating stories behind natural phenomena and reveals their broader importance to our planet.


Monday, September 20, 2021

REPOST: Narrative design myth-busting: It's not "just writing"

https://www.gamedeveloper.com/design/narrative-design-myth-buster-1-it-s-not-just-writing

In this ongoing series, narrative designer Matthew Weise confronts misconceptions about narrative design in game development.

Storytelling is as old as video games. Narrative design--the systematic understanding of how story works in games, and the production expertise that goes along with it--is still relatively new. While it’s common to see job ads for ‘narrative designer’ or ‘narrative director’ these days, this was not the case just a few short years ago.

Similar to where ‘game design’ was in the early-2000s, narrative design is an old art but a new (or newly understood) job. This means that, while it has indeed come a long way, there can still be a lot of confusion surrounding it: what it is, what it isn’t, where it overlaps with other disciplines, where it doesn’t. There are still a lot of myths surrounding it. Chances are you’ve heard of some, and may even harbor them without knowing it. 

This can create confusion and false expectations, hurting development and in some cases even derailing projects. This will be a series designed to save you from that, by debunking the most common myths of narrative design in games. First up:

“Narrative Design is just writing.”

Sometimes “narrative design” and “game writing” are used interchangeably. You may have heard this yourself. Companies are looking for a “narrative designer/writer” or someone might say “Our narrative designer is writing the story." While they are certainly related, it is important to remember these are two entirely different jobs, with different skillsets and different tools, that need to work in tandem to achieve a good narrative player experience.

Narrative design is, as the name implies, a type of design, like level design or systems design. The skillset and toolbox is that of a designer: helping create and/or leverage existing mechanics, systems, levels, art, UI, and sound to achieve a desired dramatic experience for the user.

A game writer creates the actual written content for the game, typically player-facing (dialogue, descriptions, menu text, etc.) but also sometimes team-facing (story bibles, character sheets, beat planning documents, etc.). Sometimes the narrative designer is also the game writer. Sometimes they are different people. Sometimes there is a whole team of narrative designers and game writers, each tasked with different things.

Seems simple enough, so why does confusion about it persist? A lot of it has to do with the messy history of narrative and writing in video games, specifically how game narrative has been traditionally siloed in the development process.

Historically, throughout much of the 80s and 90s, game narrative was equated with cutscenes and dialogue (and by implication not mechanics, systems, art, or UI). While many games--from Zork to Ultima to countless others--defied this assumption with their holistic, integrated approach to narrative, a lot of the language used to describe such holistic design was yet to develop, leaving many production teams with a simplistic “game vs. story” dichotomy informing their process. The rise of cinema-like visuals in the late 90s further siloed “story” from “game,” encouraging pipelines developed for film and TV to be dropped wholesale into game production. This entrenched the perception that “writers” are the ones who work with the film people to create the movie-like bits, while the team making the actual game is off doing something else.

While a lot of this confusion has been cleared up today, it can still trip up projects, starting with the hiring process. Teams that understand the difference between narrative design and writing will have clear and specific job ads that reflect this, whereas teams that do not will often use fuzzier, nonspecific language that conflates the two or uses them interchangeably. Companies that know what they are doing know that if you need a writer, you need to hire a writer. If you need a narrative designer, you need to hire a narrative designer. If you need both, you need to hire both. And they are upfront and clear about this in their job ads and all the way through their hiring process.

The biggest problem with hiring a ‘narrative designer’ when what you really expect is a writer is that it belies a narrow-minded, outdated assumption about what narrative is and can be in your game. If you think narrative design is someone coming in at the end and adding some words and VO around decisions that have already been made in gameplay, art, and UI, you are missing the holistic approach to game storytelling that has been at the core of what makes game stories so memorable, endearing, and--above all--unique to players. Your mechanics, your art, and your UI could be doing a lot of the “narrative lifting,” as it were. Distributing your storytelling across all aspects of design, not just dialogue, cutscenes, lore, etc., makes dramatic game worlds richer, fuller, and more resonant with audiences.

A good narrative designer will work with other departments to find out how the UI can express the protagonist’s personality, how a well-designed room can have the same effect as a page of dialogue, and how a game mechanic can express a spiritual and emotional conflict, not just a physical one. And those are just the basics. More advanced forms can result in truly innovative and time-saving features, like smart bark systems that ensure lines don’t become repetitive or ambitious story-generation systems like the one in Shadow of Mordor. Having a strong interdisciplinary narrative design foundation--which includes writing as an important part along with everything else--is what allows you to “level up” your narrative design, so to speak, achieving true innovation in the space that players will remember. This is the ultimate value of understanding how all these parts work together to create the art of narrative design, and what will distinguish your game as an intelligently crafted and efficient piece of work.

***

Matthew Weise is a narrative designer and writer whose work bridges the worlds of games and traditional entertainment with credits including Disney's Fantasia: Music Evolved and The Jury Room from Oscar-winning director Barry Levinson. Weise is former game design director of MIT's GAMBIT Game Lab and currently runs narrative design consultancy Fiction Control.

 

Monday, June 21, 2021

REPOST: What happens when pacifist soldiers search for peace in a war video game

https://aeon.co/videos/what-happens-when-pacifist-soldiers-search-for-peace-in-a-war-video-game

For as long as there have been wars, there have been soldiers refusing to fight in them. The experimental short How to Disappear examines the history of military desertion via the online war video game Battlefield V (2018), drawing wry and provocative contrasts between digital and real-life combat. 

Created by the Austrian art collective Total Refusal, this thoughtful film also asks: what does it mean that you can’t desert within this computer-generated world? 

Unable to abandon the battlefield or surrender within Battlefield V, Total Refusal invents new and unusual forms of pacifistic disobedience for their user-controlled soldiers. Their imaginative tactics walk a thin line between trolling other players and challenging the frameworks and mores of this peculiar space. 

By putting aside the intended gameplay, the group explores muddy moral questions of play, patriotism and war in a society where lines between digital and real environments grow more blurred by the day.

For more from Total Refusal’s Leonhard Müllner and Robin Klengel, watch Operation Jane Walk.

 

Tuesday, May 11, 2021

REPOST: The Power and Pitfalls of Gamification / Katy Milkman Ideas 05.04.2021 08:00 AM

When tech companies first adopted the technique, there was hardly any science supporting it. Now researchers know when gamelike features help—and when they hurt.

Illustration: Sam Whitney; Getty Images

This story is adapted from How to Change: The Science of Getting from Where You Are to Where You Want to Be by Katy Milkman.

When you walk 10,000 steps in a day, your Fitbit rewards you with a jiggle and some virtual fireworks, giving you a reason to pause and smile with pride. When you practice a foreign language on Duolingo multiple days in a row, you earn a “streak” and are encouraged to maintain it, giving you an extra reason to strive for repetition. 

When companies, teachers, coaches, or apps add features such as symbolic rewards, competition, social connections, or even just fun sounds and colors to make something feel more like play, they’re relying on “gamification” to enhance an experience that might otherwise be dull. I’d wager that most of the apps on your phone use some element of gamification, but we also see gamification in our workplaces and from our health insurers.

Gamification first took off more than a decade ago. At the time, there wasn’t much evidence for its value; the concept just seemed to make sense. Business consultants promised organizations that gamifying work could more effectively motivate employees, not by changing their work itself, but by changing its packaging, and making goal achievement a bit more exciting as a result (“Yes! I earned a star!”). 

Technology companies like Cisco, Microsoft, and SAP, for instance, found ways to gamify everything from learning social media skills, to verifying language translations, to boosting sales performance.

Today, thanks to science, we know a lot more about when gamification really works, and what its boundaries seem to be. Beyond the gamified apps and software we use to learn new skills, companies like Amazon and Uber now deploy it to boost worker productivity. But to get the results we seek, in our own lives and in the workplace, it’s important to understand when gamification will work—and when it will only make matters worse.

In 2012, Jana Gallus, a brilliant young economist studying for her doctorate at the University of Zurich, learned of a problem plaguing Wikipedia—and saw an opportunity to run an early test of the value of gamification. Despite the popularity of the 50-million-entry online encyclopedia available in over 280 languages, Gallus discovered that its top performing editors were leaving in droves. 

And since the so-called Wikipedians who keep the site’s articles on everything from Game of Thrones to quantum mechanics accurate and up to date don’t get paid a dime, the organization needed to find a way to keep its top editors engaged with the sometimes-monotonous task of curating online content without offering them money.

In the hopes of reducing turnover, Wikipedia let Gallus run an experiment with 4,000 new volunteer editors. Based on the flip of a coin, she told some deserving Wikipedia newcomers that they had earned an accolade for their efforts, and their names were listed as award winners on a Wikipedia website. They also received either one, two, or three stars, which appeared next to their username, with more stars allocated to better performers. 

Other newcomers who had contributed equally valuable content to Wikipedia but came out on the other end of the coin flip got no symbolic awards (and weren’t told that such awards existed). Gallus thought the awards would make a monotonous task feel a bit more like a game by adding an element of fun and praise for a job well done.

She was right. The volunteers who received recognition for their efforts were 20 percent more likely to volunteer for Wikipedia again in the following month and 13 percent more likely than those who earned no praise to be active on Wikipedia a year later.

Examples like this one might make gamification seem like a no-brainer: Why wouldn’t a corporation want to make work more fun? Despite Gallus’ exciting results, more recent research shows that as a top-down strategy for behavior change, gamification can easily backfire. Two of my Wharton colleagues—Ethan Mollick and Nancy Rothbard—ran an experiment that proved just that. 

It involved several hundred salespeople who had the somewhat boring job of reaching out to businesses and convincing them to offer coupons for discounted products or services that were then sold on their company’s website (think Groupon). The salespeople earned commissions for each coupon eventually sold online.

In an attempt to make this more exciting, Mollick and Rothbard worked with professional game designers to create a basketball-themed sales game. Salespeople could earn points by closing deals with customers, with more points awarded for bigger deals. Sales from warm leads were called “layups,” while cold calls were dubbed “jump shots.” 

Giant screens on the sales floor displayed the names of top performers and showed occasional basketball animations like a successful dunk. Regular emails updated the “players” on who was winning, and when the game was over, the winner got a bottle of champagne.

Employees on just one sales floor participated; the rest were left out. My colleagues then compared the trajectories of salespeople who played the game with those who didn’t. Though they’d had high hopes, Mollick and Rothbard were surprised to find that playing the game didn’t improve sales performance, and it also didn’t improve the way salespeople felt at work. But digging into their data further revealed a very interesting pattern.

The researchers had asked everyone in their game a set of questions: Did people follow the game? Did they understand the rules? Did they think it was fair? These questions were designed to measure which salespeople had “entered the magic circle,” meaning that they agreed to be bound by the game’s rules rather than the normal rules that ordinarily guide their work. After all, if people haven’t entered a game mentally, there’s no real point to it.

Sure enough, the salespeople who felt that the basketball game was a load of baloney actually felt worse about work after the game was introduced, and their sales performance declined slightly. The game benefited only the salespeople who had fully bought into it—they became significantly more upbeat at work.

My colleagues argue that their study highlights a common mistake companies make with gamification: Gamification is unhelpful and can even be harmful if people feel that their employer is forcing them to participate in “mandatory fun.” Another issue is that if a game is a dud, it doesn’t do anyone any good. Gamification can be a miraculous way to boost engagement with monotonous tasks at work and beyond, or an over-hyped strategy doomed to fail. What matters most is how the people playing the game feel about it.

Gamification may have worked so beautifully at Wikipedia in part because Wikipedians don’t get paid but instead come to the site as volunteers. And it’s relatively safe to say that volunteers for any organization want to be there and want to be productive, or else why would they be volunteering? 

Wikipedia editors devote time to the world’s largest online encyclopedia because they’re intrinsically motivated to help share knowledge widely, just as volunteers for the Nature Conservancy want to help the environment. So Wikipedians naturally have the goal that the site’s awards are designed to reinforce.

At its best, gamification seems to work when it helps people achieve the goals they want to reach anyway by making the process of goal achievement more exciting. When people fully buy into a game, the results can be impressive, durably improving volunteers’ productivity, boosting worker morale, and even, as seen in one recent study, robustly helping families increase their step counts. But gamification can tank when players don’t buy in. 

If a game is mandatory and designed to encourage people to do something they don’t particularly care to do (like achieving an outstanding record of attendance at school), or if it feels manipulative, it can backfire. Amazon seems to understand this: They’ve kept their gamification program entirely optional so employees who enjoy it can use it, but it isn’t imposed on anyone.

This latest science suggests it makes lots of sense for apps to continue gamifying our achievements, so long as they’re promoting goals we’re intrinsically eager to reach. But when it comes to using gamelike features to promote change we might not find so appealing, gamification doesn’t seem to be a workaround for more substantive solutions. 

While not every context is the right one, under certain conditions, gamification can make pursuing your aspirations feel more like play. And that is a powerful tool in any personal or professional quest for change.


Adapted from How to Change: The Science of Getting from Where You Are to Where You Want to Be by Katy Milkman; foreword by Angela Duckworth, in agreement with Portfolio, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © Katherine L. Milkman, 2021.



Thursday, April 8, 2021

American Cinematographer Magazine / Game On: Game-Engine Technology Expands Filmmaking Horizons / August 14, 2020 / Noah Kadner

https://ascmag.com/articles/game-on-game-engine-technology

--reposted--

“What directors want are iterations. They want to find all the possible challenges early, so the faster you can see a result, understand the issues and create another iteration, the better. Real-time engines make that process happen instantly. It’s just like playing a video game.”

https://cms-assets.theasc.com/Game-On-1_TTF_PATHFINDER_TR_01_PNG-copy.jpg?mtime=20200814090957

At top, for Season 8 of HBO’s Game of Thrones, The Third Floor offered real-time rendering during production — which, for example, helped represent a dragon (pictured here) as shots were composed, framed and performed. The company also provided virtual versions of story locations for preproduction “scouting.”

The Mandalorian image by François Duhamel, SMPSP, courtesy of Lucasfilm Ltd. Cine Tracer image courtesy of Matt Workman/LED volume photo by Mike Seymour. The Lion King unit photography by Michael Legato, courtesy of Disney Enterprises, Inc. Game of Thrones image courtesy of HBO and The Third Floor, Inc.

Game engines such as Unreal Engine and Unity were, as their umbrella term implies, originally designed for the development of real-time applications, aka video games. In recent years, advances in both hardware and software have ushered these engines into the purview of cinematographers. Game engines are now used to create many forms of visualization for filmmakers, including previs, techvis and postvis — and even final in-camera movement and imagery for such productions as The Lion King (AC Nov. ’19; see sidebar below) and The Mandalorian (AC Feb. ’20). 

For The Mandalorian, an LED-wall system known as “the Volume” allowed the filmmakers to capture actors in a photo-real (or nearly so) environment, in-camera and in real time, with the aid of Epic Games’ Unreal Engine.

Traditional 3D animation applications such as Autodesk’s Maya and 3ds Max have long played a significant role in visual effects. Their comparatively slower performance, however, has seen their benefit focused more on pre- and postproduction, where time is somewhat less critical than in live-action production. These 3D apps prioritize final image quality over performance; as a result, image render times can easily stretch into hours or even days per frame, depending on the complexity of the shot and the computational power of the hardware.

By contrast, game engines were initially optimized for speed first and image quality second, in order to support gameplay in real time, often at frame rates of 60 frames per second or more. And in the past few years, major technical advancements in the graphics-processing unit (GPU) of computers have enabled such engines to render production-quality imagery while maintaining their real-time speed. 

Commensurate with these hardware improvements, developers such as Epic Games (creators of Unreal Engine) and Unity Technologies (creators of Unity) have optimized their software for direct inclusion into the production pipelines of features and television. These changes are intended to support the crossover between traditional cinematography and computer-generated imagery — a dynamic that can serve filmmakers in a whole host of ways.

Matt Workman, a cinematographer and software developer, is working to erode even further the boundaries between filmmaking and CG with his creation of Cine Tracer, a real-time cinematography simulator built with Unreal Engine. The application — offered directly to filmmakers — enables the viewing of real-world camera and lighting equivalents in simulated, user-designed movie sets to produce highly accurate shot visualization.

Director of photography and previs artist Matt Workman’s real-time “cinematography simulator” Cine Tracer is powered by Unreal Engine. (Screen capture.)

“My background includes about 10 years of traditional cinematography, mostly in commercials around New York City,” Workman says. “During that time, I worked with a lot of visual-effects companies on effects-heavy commercials — so I started creating previs tools to communicate in 3D and plan with the teams. A couple of years ago, I started developing Cine Tracer to handle that workflow more efficiently. I [designed] it as a video game [that’s controlled similarly to playing a third-person shooter game], but it’s intended to help filmmakers quickly visualize their shots. 

“I went to school for computer science, but I’ve always been tinkering with 3D,” he says. “Luckily, it’s 2020 and there’s YouTube, so the amount of available free education is incredible, as long as you have the time and the patience to learn. 

“Most cinematographers who are on the technical side pick up [game-engine-powered previsualization] very quickly. If you want to add light coming through the window, the steps to get there are very quick. It’s the same way you do it in the real world.”

Workman at a demo of an LED Volume. The realistic background is actually an image that appears on the LED wall.

Regarding the primary advantage that real-time engines have over the more traditional computer-animation software, Workman notes, “The iteration time is much faster. If you want to see a camera move [for a specific shot] in order to determine, for example, what it would look like if you start close and then pull out wide — to see that change with Maya, you’re taking up to a couple of hours to render maybe 120 frames at high quality. In Unreal, that change happens instantly.” 
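The iteration gap Workman describes is easy to quantify with back-of-the-envelope arithmetic. A minimal sketch, where the per-frame timings are illustrative assumptions drawn from his "couple of hours for 120 frames" example, not measured benchmarks:

```python
# Rough comparison of offline vs. real-time iteration cost for a short
# camera-move preview. All timings are illustrative assumptions.

def preview_seconds(frames: int, seconds_per_frame: float) -> float:
    """Total wall-clock time to produce a preview of `frames` frames."""
    return frames * seconds_per_frame

FRAMES = 120                 # ~5 seconds of animation at 24 fps
OFFLINE_SPF = 60.0           # assume ~1 minute/frame for a high-quality offline render
REALTIME_SPF = 1.0 / 24.0    # a real-time engine renders at (at least) playback rate

offline = preview_seconds(FRAMES, OFFLINE_SPF)    # 7200 s, i.e. 2 hours
realtime = preview_seconds(FRAMES, REALTIME_SPF)  # 5 s: the move plays as you watch

print(f"offline: {offline / 3600:.1f} h, real-time: {realtime:.0f} s")
```

At these assumed rates, each offline iteration costs two hours, so a director gets only a handful of tries per day; the real-time version is limited only by how fast the virtual camera can be moved.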

Visualization studios like The Third Floor have leveraged Unreal and other real-time engines on projects for the past several years to deliver previs, techvis and postvis services, and other forms of animation used in various phases of the production chain. The Third Floor’s credits span multiple Marvel movies, The Rise of Skywalker (AC Feb. ’20) and other recent Star Wars films, as well as popular episodic series such as The Mandalorian and Game of Thrones (AC May ‘12 and July ‘19).

Casey Schatz, The Third Floor’s head of virtual production, has worked on such projects as Thor: Ragnarok (AC Dec. ‘17), Gemini Man (AC Nov. ‘19) and The Mandalorian, and has helped innovate everything from flame-throwing motion-control robots to real-time virtual eyelines. 

In addition to the obvious benefits of real-time rendering, Schatz sees these software and hardware innovations as facilitating greater direct collaboration between studios like The Third Floor and cinematographers. “Historically, as previs creators, we were brought in very early, often before the DP was even hired,” he says. “So a certain amount of work had already been done, and when the cinematographer finally came on, they often felt like they were just painting by numbers. No one in visual effects wants that approach. We’re all trying to blend in with the wheel of filmmaking that’s existed since the Lumière brothers.

“Just a few years ago, game engines weren’t as conducive to moviemaking. Unreal’s virtual camera didn’t have a focal length or a film back [aka aperture gate] — it just had a field of view. Now there’s focal length, film backs, depth of field, f-stops, ISO and shutter speed. Epic even added the ACES color workflow into the rendering pipeline.

“A respect for and acknowledgment of traditional filmmaking has made its way into the software,” Schatz adds. “So you can say, ‘I’m shooting anamorphic with Panavision Primos,’ and we’ll have a menu of those exact focal lengths so that you can’t previsualize a focal length that doesn’t exist in your real lens kit.
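The film-back parameter Schatz describes is not cosmetic: a camera's horizontal field of view follows directly from focal length and film-back width. A minimal sketch of the underlying trigonometry (plain math, not Unreal's actual API; the Super 35 width used here is a commonly quoted figure and an assumption for illustration):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, film_back_width_mm: float) -> float:
    """Horizontal field of view (degrees) for a given lens and film back."""
    return math.degrees(2.0 * math.atan(film_back_width_mm / (2.0 * focal_length_mm)))

SUPER_35_WIDTH = 24.89  # mm; commonly quoted Super 35 aperture width (assumption)

# A shorter lens (or wider film back) yields a wider field of view:
for focal in (18, 35, 85):
    print(f"{focal}mm on Super 35 -> {horizontal_fov_deg(focal, SUPER_35_WIDTH):.1f} deg")
```

This is why a previs camera needs a film back rather than a raw field-of-view number: the same 35mm lens frames differently on different sensor sizes, and a menu of real focal lengths only means something once the film back is fixed.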


“I’m working on the Avatar sequels now using Gazebo, Weta Digital’s proprietary real-time engine,” he says. “Russell Carpenter [ASC], the movies’ cinematographer, sat down with the lighters before we did any of our live action in New Zealand. Together they set the tone, the mood, the general key-light direction, the key-to-fill ratio, et cetera; all of this was done using [cinema terminology] Russell is accustomed to. Thus, the line in the sand between traditional cinematography and computer graphics is disappearing more and more every day.”

Indeed, the rendering time of high-resolution interactive imagery has advanced to the point that it can actually appear onscreen as-is or with minimal adjustments in post — as employed, for example, on The Mandalorian. “That was our goal,” said Greig Fraser, ASC, ACS (who shot the Disney Plus Star Wars series along with Barry “Baz” Idoine), as reported in AC’s February 2020 issue. “We wanted to create an environment that was conducive not just to giving a composition line-up to the effects, but to actually capturing them in real time, photo-real and in-camera, so that the actors were in that environment in the right lighting — all at the moment of photography.”

When asked which specific advancements in real-time engines have pushed forward their synergy with traditional cinematography, Schatz explains that the ability to simulate bounce light, something so fundamental to traditional cinematography, is a game changer because only recently could this happen in real time or even close to it. “[During a previs session] a traditional cinematographer could be looking at a shot and say, ‘If we put a Kino Flo 6 feet away and add a bounce card, what would be the result?’ We can now show that result very quickly and accurately. Prior to these advancements, computer lighting was more analogous to theatrical lighting; you could aim a light and cast a shadow, but then you would have to cheat a bounce light by adding other lights to the sides at lower intensities.”
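Schatz's bounce-card example can be approximated with textbook single-bounce radiometry: light falls off with the square of the distance to the card, a Lambertian card re-emits a fraction of it diffusely, and the reflected light falls off again on the way to the subject. The sketch below is a deliberate simplification with illustrative numbers, not how any particular engine computes global illumination:

```python
import math

def bounce_contribution(source_intensity, dist_to_card_m,
                        card_albedo, card_area_m2, dist_to_subject_m):
    """Single-bounce estimate of light reaching the subject via a bounce card.
    Irradiance on the card falls off as 1/d^2; a Lambertian card reflects
    albedo * E / pi as radiance, which the subject receives scaled by the
    card's solid angle (area / d^2). Cosine terms are assumed to be 1."""
    irradiance_on_card = source_intensity / dist_to_card_m ** 2
    reflected_radiance = card_albedo * irradiance_on_card / math.pi
    return reflected_radiance * card_area_m2 / dist_to_subject_m ** 2

near = bounce_contribution(1000.0, 2.0, 0.8, 1.0, 1.0)
far = bounce_contribution(1000.0, 2.0, 0.8, 1.0, 2.0)
print(near / far)  # moving the subject twice as far from the card quarters the bounce
```

The double inverse-square falloff is what made faked side lights a poor stand-in: their intensity had to be hand-tuned every time the card or subject moved, whereas a real-time solver recomputes it per frame.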

One of Schatz’s Third Floor colleagues, real-time developer Johnson Thomasson (The Mandalorian, Venom, Godzilla: King of the Monsters [AC June ‘19]), is quite directly involved with the intersection of live-action cinematography and real-time animation — specifically via motion capture and “virtual-camera sessions.” 

“One of the major benefits of real-time animation is ‘practice time’ for the filmmakers,” Thomasson says. “We worked on Christopher Robin, and director of photography Matthias Koenigswieser was able to use a virtual camera rig, playing back animation from the film and recording his camera motion so he could practice operating. He was able to directly experience the size difference between 12-inch-tall Piglet and [for scenes set in the title character’s early years] 4-foot-tall Christopher Robin.

“It was a real challenge framing both of them, and something he hadn’t considered before coming to our virtual-camera sessions,” Thomasson continues. “It allowed Matthias to design his compositions ahead of the actual production. He was shooting with an empty frame [aka a clean plate] on the day, but having rehearsed virtually, he already knew what the right framing felt like. When directors and DPs go through a virtual-camera session, they discover new ideas, exploring, expanding and coalescing their creativity.

“Another benefit is the physical representation of depth of field, which has never been rendered well in previs in the past,” Thomasson adds. “Unreal’s depth-of-field camera model is based on real-world cameras. So a cinematographer can ask which stop we’re at in a virtual-camera session and get an answer that reflects a physically accurate visual model. In my experience, when DPs learn about that capability, they want to take advantage of it, because depth of field is one of the strongest tools in their toolset for communicating their choices in cinematic language early on in preproduction.
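The physically based depth-of-field behavior Thomasson describes rests on standard thin-lens formulas: a hyperfocal distance computed from focal length, f-number and circle of confusion, from which the near and far limits of acceptable focus follow. A sketch of that textbook math (the 0.025mm circle of confusion is an assumed Super 35 value; real camera models add refinements):

```python
def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm=0.025):
    """Near/far limits of acceptable focus (in mm) from the thin-lens
    depth-of-field approximation. coc_mm is the circle of confusion."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= hyperfocal:
        return near, float("inf")  # everything beyond the near limit is sharp
    far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

# A 50mm lens focused at 3m: stopping down from f/2.8 to f/8
# visibly deepens the zone of acceptable focus.
near28, far28 = dof_limits(50, 2.8, 3000)
near8, far8 = dof_limits(50, 8, 3000)
print(far28 - near28 < far8 - near8)
```

Because the same inputs a DP already reasons with (lens, stop, focus distance) drive the render, asking "which stop are we at?" in a virtual-camera session has a physically meaningful answer.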

“For directors who are not veterans of giant visual-effects tentpole films, it’s a new experience when they first get to the set. But [prepping] in a low-pressure, small-audience situation, and exploring and practicing via [real-time interactive previs], prepares them for the set like nothing else could.”

Looking toward the future, Schatz sees game engines becoming further entwined with live-action cinematography. “The hardware and software are going to [continue to advance], and it might almost become indistinguishable in terms of which imagery is real-time and which isn’t,” he says. “This is in service of the story and not to show off the technology. The motto of the Previsualization Society is ‘fix it in pre.’ The more creative decisions you can interactively figure out before you get on set, the better.

“The goal has always been that even someone that has never touched a computer before, but is a remarkable cinematographer, can sit down next to a computer artist and talk in the language that they’re comfortable with — f-stops, T-stops, shutter speeds, film ISOs, grain, bounce light, diffuse light — the traditional cinematography terms that have existed for more than 100 years.”

Workman adds, “What directors want are iterations. They want to find all the possible challenges early, so the faster you can see a result, understand the issues and create another iteration, the better. Real-time engines make that process happen instantly. It’s just like playing a video game.”


Prep Becomes Production

Society members Caleb Deschanel (wearing goggles) and Robert Legato (right), alongside Magnopus virtual-production producer A.J. Sciutto, at work on the photo-real animated feature The Lion King, which was produced with the aid of the Unity game engine.

Game-engine technology can provide cinematographers with ways to previsualize in a simulated filmmaking environment — but it can also serve as a filmmaking medium in itself. With the aid of game-engine tech, veteran cinematographer Caleb Deschanel, ASC, leveraged his extensive traditional cinematography experience to capture The Lion King with entirely virtual characters and settings (save for a single shot). Making this possible were visual-effects supervisor Robert Legato, ASC, and a team of technicians and artists at Magnopus, who coupled the Unity game engine with various traditional dollies, cranes, tripods and other camera-movement tools, enabling Deschanel to manually operate virtual cameras while directly interacting with live animation.

“The essential thing for me in filming this way is having enough visual detail so I can make the same kind of decisions I always make on a set,” Deschanel tells AC. “That informs how you compose the shot and how you light it.” Though not yet photo-real — as the process to make them so would unfold later at MPC in London — the filmmakers’ subjects were imbued with enough detail by the real-time interactive system “that you can read their emotions and understand how close or how wide you need to be,” Deschanel says.

Legato — who was a key member of the virtual-camera crew in addition to his role in developing the technology — adds, “We’re essentially motion-capturing the Steadicam operator attached to the Steadicam or a dolly grip attached to the dolly. With the game engine, everything is live and under your control. You just walk over to the place that seems the most appropriate for you to film it. And because it’s live, you can say, ‘Let me get a little lower or a little higher, or let me try this same position with a 20mm instead of a 24.’ You’re tapping into your on-set intuition, which is ultimately years and years of experience, instead of overintellectualizing it.”

With these tools at their disposal, Deschanel and crew could not only capture the motion and composition of their virtual cinematography, but make visual design choices as well. “I would sit with [lighting director] Sam Maniscalco, who was my gaffer, with the files of all the sets,” Deschanel recalls. “We had a choice of 350 different skies to give us the right mood for every scene. It was just like being a cinematographer [in a traditional environment], but having far more control than you normally would. On a [traditional set], you don’t have control over the clouds and sky, so you have to follow the sun throughout the day. It was exciting and a lot of fun — I was really surprised.” — Noah Kadner

From left: The Lion King director Jon Favreau, Deschanel and production designer James Chinlund explore the previsualized world.


Saturday, February 27, 2021

REPOST: What Every Game Designer Should Know About Human Psychology [02.25.21] - Michael Moran

https://www.gamecareerguide.com/features/1987/what_every_game_designer_should_.php

    From door handles to coffee mugs to fighter jet cockpits, User Experience shows up in everything you interact with.

    Every human-made object has been designed, based on either what will be easier for the user or what was easiest for the manufacturer. (Whenever using something feels frustrating or confusing, you can be sure it was the latter!)

    But what about the concept of User Experience in the world of video games?

    When our brains process something as complex as a video game, there's a lot going on. Understanding the role that psychology plays within the science of game development is crucial to creating memorable and engaging games.

    In this article, we'll summarize the importance of understanding psychology while crafting the user experience of a video game.

    We'll touch on subjects such as the importance of play-testing, employing affordance in your games, how to use psychology to create usability and engageability, and the role of the "Gestalt" theory in game design.

    Why Design Should Always Focus On The User

    When the design is focused on the perspective of the user, a product becomes much more practical, desirable, and useful.

    A great example of this dates back to fighter pilots during WWII. Exhausted and under pressure, these pilots had a high rate of human error and were at risk of accidentally pushing the wrong button on the dashboard of their aircraft.

    [Image: WWII Spitfire cockpit]

    Unfortunately for them, dashboards were not consistent between aircraft. This meant the pilots had to learn a new set-up every time they switched planes.

    This made it even more likely that they would press the wrong button, so standardized cockpits needed to be developed to improve the user experience for these pilots.

    This type of thinking can also be applied to video games by ensuring that the design of the game is centered on the user experience. By understanding the psychology of the user, you'll be better able to make design decisions tailored to their needs.

    It's also important to remember that there is no such thing as a neutral design. Everything we design will influence people to use it in one way or another. This is an important ethical issue to consider, especially when certain retention mechanics can create addictive behaviors and punish disengagement.

    The Importance of Play-Testing

    Every user is different, and our perspective depends on our experience, our history and what is important to us. When designing video games for different types of users, it's not possible to know in advance what every user will bring to the experience. That's why it's essential to have a diverse team of designers with different backgrounds.

    That's also why video game designers "play test" their games. Play-testing reveals how the game is perceived by the people who will actually be playing it. With this method, designing a game becomes a cycle of action and iteration.

    Designers create a game, then test it to see if it is accomplishing what they wanted to achieve. If insight from audience testing finds the game lacking, it's back to the drawing board to refine with more information. Then the game is tested again, and the cycle continues.

    The design team may have certain goals that they are trying to achieve within the game. However, they will need to iterate on all aspects of the gameplay, from dialogue to visuals to mechanics and more, in order to achieve those goals.

    When it comes to play-testing, here's an important tip: the developer shouldn't be in the room with the play testers. Not only will it make the players feel somewhat awkward and intimidated, it will also make the test less accurate.

    Players tend to make more effort to understand a game when the developer is watching than they would if they were playing it at home (perhaps out of politeness to the person who has put their heart and soul into crafting the game). To get an accurate measure of how many players would simply give up on a game, play testers should be free to play the game by themselves.




Game Career Guide

https://www.gamecareerguide.com/