
[WIP] Final Fantasy (Plot Summary)

This is a very skimpy, poorly written and organized summary of the plot to the first set of stories. Part of the challenge was figuring out how to fit all of this into game format without breaking it up into a few titles (YUCK! -.-'). I'm dropping it here because no one looks at my blog, and I'm overly nervous and antsy about getting content off of that older site and over here for safekeeping. I don't even trust my own Google Docs not to lose data anymore. It's all about redundancy. I will edit these soon, and try not to let it spoil anything for you if you find it interesting. You don't have to read all the way to the end yet.

Final Fantasy I
Zenobia is a massive, technologically advanced desert metropolis. An empire seeking to conquer and erase the old world. Using their far superior technology they sweep the vast continent committing genocide, establishing new Zenobian cities atop the ruins and corpses. The desert city is divided in two. The core of the city is awe inspiring. Buildings that tower into the sky. Wealth and luxury for every man and woman. Highly advanced technology engrained into daily life. Outside the core, beyond the walls and defenses, the people of the slums suffer in poverty and oppression, living in crudely built structures under horrible conditions.

Anastasia Maria Grace is a resident of the slums. She is an everyday twenty-three-year-old girl who believes so strongly in the ideals of equality, freedom, and world peace that she is willing to stand against an entity so powerful that any hope of a world free of it has been lost. She and three of her friends (Jin, Willow, Kato) plan an attack on Zenobia Prime HQ, a technology corporation owned by Prince Sarovoc Ducrinus and the sole provider of energy to the entire city. The plan fails and Ana's friends are killed by the faulty explosive timers. She is rescued by a mysterious outsider who calls himself Edge. He parts ways to protect her (as he is being hunted by Zenobian troops), instructing her to meet him at Odessa Inn. With her best friends dead, she goes to the only place she can think of. Stryker is ex-Zenobian Special Forces, the owner/operator of "Rebel Radio", and mentor to Jin. He volunteers to escort her to Odessa.

Leaving Zenobia for the very first time, Ana accidentally embarks on an epic journey that takes her across the world to crusade against the Zenobian Empire.

She meets Juakeem Mohinder, a bedouin warrior and chocobo breeder, and his war bird Shae'elle. He gives her a young, unremarkable chocobo named Boko who is determined to become a worthy war bird.

The group meets Edge and Commander Bastian Fairheart at Odessa Inn. The Inn is a front for a rebel base located beneath the ground. The base is compromised by Zenobian forces and Bastian leads the group out of the desert and into the wilderness. Along their journey they meet Casius Magnus, a nomadic Black Mage who saved Ana from capture back on the road to Odessa. They also meet Lillian Nobunaga, a Swordstress, traveling warrior and the last of her ancient race.

When they reach the coast, they meet Cideon Armstrong, Captain of the Free Airship Valiant, and his nephews Biggs and Wedge. Cid reluctantly grants them sanctuary aboard his ship. Ana inspires Cid to stop hiding underwater and start living and fighting.

While helping rebel forces defend a city from Zenobian assault, the group has an encounter with Marrick Cross, a White Mage and exiled Paladin of Whiteguard. They also square off with the Agents of the Chimera Initiative. Victor Chimera, ex-Zenobian Special Forces, ex-Imperial Operative, and war hero. Leo and Lyra Stratta, lethal killing machines trained by the legendary brute Hogo Marks. Tallis "Tank" Cortez, heavy weapons and demolition expert. Callixta "Raven" Ravana, deadly sniper and ace pilot. Marrick attempts to commandeer the Valiant, but is thwarted by the crew. The Zenobian attack fails.

Bastian (through his belief in Ana's ability to inspire otherwise defeated men) suggests that she be taken to Tessafe's Mirror of Fate. A standing mirror with decorative silver framework said to have belonged to Queen Tessafe of Whiteguard. This mystical, enchanted mirror shows any who gaze into it an image of importance. What it shows you varies, and it's not always easy to understand.

One by one the group enters the temple to behold the mirror's message.

(In order, identifying what they saw, and whether it was shown or later revealed)

Biggs (shown): A man in a white mask (Leo Stratta and the man who kills him. He did not encounter Leo at the previous battle, so he does not recognize him)

Wedge (revealed): A knight clad in black armour holding a bloody sword (Kierkess Aventola, and the man who kills him in VIII)

 Seto (shown): A Dragon's eye opening (the awakening of Bahamut)

 Lilly (revealed): A flaming bird (Phoenix)

Bastian (revealed): A soaring white dove (symbolizing Ana, freedom, and hope)

Stryker (revealed): Ana

Casius (revealed): Ana

Ana (revealed): It shows her key fragments of future events taking place within the stories.

Final Fantasy II

Prince Sarovoc Ducrinus assassinates his own brother in order to take the throne as Emperor of Zenobia.

Ana emerges from the temple after hours of the group waiting. The journey continues as Ana drags Cid across the globe, spreading her message and ideals.

The Valiant visits the eastern plains of Turra where Zenobia is burning villages and temples to the ground as they expand into the territory. They meet Banion Warbear, a warrior Monk and hero of Turra. Together they drive Zenobia from the lands of Turra. Banion agrees to travel with Ana in order to learn more about his enemy.

Stryker confronts Edge about using the name Edge Widowmaker, telling him he knew the man, and that Widowmaker was older than Stryker is. Edge reveals his true identity to be Seto Bobaloo Sobral. He assumed the name and persona after witnessing the ex-Zeno soldier die in a gunfight in the slums of Zenobia.

Marrick Cross springs himself from the Valiant's holding cells, determined to commandeer the ship and kill its charming Captain. When he is clearly outgunned and outnumbered, he surrenders. As he is being escorted back to the cells, he levels with Cid about why he wants his ship so much. He has a tip about a Zenobian plan to take the city of Kiratoma, one of the largest rebel holds in the world. He is trying to reach the city before it falls. Cid locks him up and heads for Kiratoma. When the plan is confirmed, Marrick is released under a close watch.

The men of Kiratoma are defeated, tired, scared and pissed off. They see no point in fighting an impossible battle. Command struggles to keep morale at a reasonable level. The majority of the soldiers are pushing for a surrender. Stryker dresses Ana in full combat gear, as if she were actually going to fight. He marches a terrified Ana past hundreds of soldiers. The point is a strong one: If you cowards won't fight, the women and children will, because someone has to defend this city.

His plan works, and only a few hundred out of thousands refuse to fight. With the help of the heroes, Kiratoma is barely defended for the moment, and Marrick has become an ally of the group.

Zenobian forces muster again as reinforcements arrive on both sides. The second battle is brutal and rebel forces have no choice but to retreat or die. But not before Stryker shoots Leo in the shoulder with a pistol, dropping him instantly. The group escapes to The Valiant, but Casius has chosen to stay behind and fight the pursuing troops. Ana and Seto strongly protest leaving him but Cid makes a tough call, assuring them he can handle himself.

Using his native martial art combined with fire and ice magic, he begins to fend off the soldiers, casting reflect to neutralize their bullets. But more and more soldiers pile into the clearing...

Final Fantasy III

The group has taken on refugees from the Kiratoma Army and must take them to Andora, the Rebel Capital.

Cid and Stryker confront Marrick about his White Mage heritage; the reason he wanted to defend Kiratoma is that its dense, harsh territory was the last defense between Zenobian lines and Whiteguard territory. Marrick is honest about being born a Paladin of Whiteguard, devoting his life to the crown, and being exiled by the Arch Mage for falling in love with his granddaughter. Stryker confesses to Cid and Marrick that he used to be Special Forces, army before that. He devoted his life to helping slaughter the people he's trying to help now. Cid accepts them both as the men they are today.

Andora brings a new hope. Several outlying pro-Andora Nations have mustered their armies and are preparing to attack a Zenobian fortress alongside Andorans. They capture the fortress. During the battle a mysterious man (Havyian Solaris) saves Seto's life, disappearing afterward. Casius arrives during the battle (he was told to go to Andora as the group was leaving).

But Sarovoc Ducrinus approaches from above with Callixta in a helicopter. He takes careful aim with Raven's long-range rifle. Seto sees the chopper and moves to reach Ana, yelling her name. A mysterious man (Simeon Ortega) grapples his torso, holding him back. Sarovoc pulls the trigger and a high-powered round rips through Ana's chest, killing her instantly. The rebel heroine and domestic terrorist is dead...

The group looks on helplessly and in horror. Seto rushes to her side. As this intense, profound emotional maelstrom ignites within Seto, a void of energy forms high above him. A massive sapphire dragon emerges head-first, rotating as it comes forth before extending his wings. Bahamut!

Callixta and Sarovoc hightail it out of there, narrowly dodging Tri Flare attacks from the forgotten Deity. He looms in the sky, circling around and around as Seto holds Ana's body in his arms.
The group captures the mysterious man.

By now Ana has travelled across the world, meeting and inspiring hundreds of thousands of lives. She taught people that strength isn't determined by one's skill with a blade, but by the quality of their character. She also started a worldwide feminist movement, inspiring millions of women and girls to be what they want to be, not what's expected of them.

Final Fantasy IV

A moving funeral is held for Ana at the Mana Tree (a place the group visited together, and where Ana made everyone promise to keep fighting, always, no matter what happens), where she wanted to be buried when she died. Hundreds of rebel vessels and thousands of pilgrims attend the emotional ceremony.

Distant, emotionally void, and still covered in Ana's blood, Seto heads home. The group reluctantly follows him deep into the wilderness (after bringing him aboard The Valiant when he collapsed). When they arrive, they are met by Henato Junior, eldest brother of Seto. His father (Henato Senior) tells him that the essence of Bahamut resides in the sapphire pendant around his neck. When he ran away at fifteen, he missed out on his rite of passage as a Dragoon prince, when he would have been told this and taught how to command him. Henato Senior tells the group that there are other forces in the world similar to Bahamut.

Seto makes it clear to his family that his guests are no longer welcome and the group is forced to leave.

Meanwhile, the arrival of Bahamut has Sarovoc researching the phenomenon, and Leo is healed up from his wound.

Without Ana, the group falls apart. Casius returns to his travels. Banion returns to his people, Bastian has long since left, Juakeem and Shae'elle have returned to the Zenobian desert. Having nowhere to go, Marrick and Lilly remain with Cid, Stryker, Biggs, and Wedge. There is a lot of personal struggle and grieving. Characters swapping memories and stories.

Simeon Ortega is interrogated extensively, but he reveals nothing, only that the other mystery man is the real bad guy, despite what it all may look like.

They decide to gut up and get back to fighting the Empire. They plan an attack on a Zenobian outpost with some rebel forces. The plan fails and, as a sniper, Stryker reluctantly escapes after watching the team captured through the crosshairs of his empty rifle. Marrick unloads both pistols three times before throwing them down and surrendering. The Valiant is seized and its crew jailed along with the group.

Final Fantasy V

Lyra Stratta and a company of Special Forces soldiers close in on a frosty stone altar. Lyra closes her eyes, touching the cold surface slowly. She comes out of the remote temple eager to test her new prize. She summons Shiva, the beautiful Goddess of ice, who, in a fit of rage and grief, kills everyone but Lyra using Diamond Dust. (This stems back to the Great War, when the Gods were still free.)

Cid, Biggs, Wedge, Lilly, and Marrick are in a Zenobian prison.

Stryker travels far and wide searching for Casius. He gets him up to speed, and they head directly for Sobral territory. They manage to be taken prisoner and Seto speaks to them. Stryker tells him the gang is locked up and Seto begins preparing for the journey.

He talks to his father, and brothers Henato and Sedato. He tells them everything. That an entity so evil that it would destroy everything to rebuild in its own image threatens the lives of every free man the world over, and he must leave again to fight it. His father is moved to tears, telling Seto he is proud of him. Seto is the runt; he always lived in the shadow of Henato, a hero of the tribe, and Sedato, now a hero too. Henato Sr. loves all of his sons, but he recognized that Seto was special from birth; that's why he has Bahamut. Even though Neo and ZERO are more powerful, Bahamut is the father of all Gods, the one true God. It's been the duty and destiny of his kin to protect the world, and long have they neglected it.

They free their friends (and a lot of rebel soldiers and Andoran civilians) with the help of Juakeem, his men, and local rebel factions. Casius goes 0 to 100 particularly quickly and with considerable force. He leaves a trail of victims through the prison. He finds Boko, who has been chained and beaten, and frees him after killing his tormentors. The Valiant is located and taken back.

The Time Mage Elder Kirin visits Seto aboard The Valiant. He tells him he must release Ortega immediately, speaks to him about the Gods, and tips him off about the location of Ifrit.

The gang arrives at the cave, and Leo has been perched like a predator stalking prey. He squares off with Wedge (two spear enthusiasts). Casius has hung back to make sure Wedge is alright, and he can see that Leo is toying with him like a cat with a mouse pinned to the floor. He interferes, disarming Leo, breaking his nose, left wrist, and right arm at the elbow before throwing him over a cliff into the water below.

The group retrieves Ifrit and Phoenix successfully. Meanwhile Members of the Chimera Initiative recover Siren and Leviathan.

While searching for the resting place of Titan, the group encounters a behemoth and together they kill the beast in an intense battle. Everyone survives but Casius is worn out and beat up. (Titan and Golem are retrieved)

Seto heads for Gallik Baal'a, a small continent to the far north (a place Ana made him promise not to go before she died), to search a legendary cursed shrine.

Stryker has Cid head for Juakeem and Shae'elle, and Banion afterward. Stryker brings the rest of the group together again.

Final Fantasy VI
Seto searches an ancient ruin with a torch, coming upon an ebony skull in a pool of water. He recovers Diablos and Doomtrain.

The gang encounters the Chimera Initiative in an ancient temple compound and fights their way out with Sylph and Typhoon.

The gang reunites with Seto. Elder Kirin summons him by teleporting him to Mount Ramuh, in front of the members of his high council, the oldest and most powerful of Time Mages, thousands of years old. His back is to them, with Kirin before him. He cannot see or hear them, but he can feel their eyes on him, their presence and judgement. It brings him to tears, and he drops to his knees. Kirin tells him the recently acquired deities are dangerous and must be returned to Gallik Baal'a immediately. He takes him to a secluded area of the wilderness, comforting him, but telling him that the Time Stream was broken by a curious young Time Mage after reading a book from the library of world culture and history. He stole a very specific ring and used it to travel back in time in an attempt to change the course of events. He saved Seto's life, hoping he would save Ana's. A man was sent to capture him and return him to his own place in the Time Stream. During the process, he stopped Seto from potentially altering the natural course of events. He gives him eight enchanted rings that cast protect on their wearer. He asks him what he thinks the Mirror showed Ana, and why she changed from that moment forward. Seto begins to put all the little clues together. Kirin says simply, "Keep fighting," and tells him to find The Tomb of Alexander.

Seto is returned to the group and emotionally reveals that the Mirror showed Ana the future, even the events after her death. They gather to deal with the summons: who will be charged with which deities.

Leo has uncovered Cerberus and unleashes it on a village of fishermen.

Tybin and Sabin Marks are hired by Sarovoc to deal with Cid and his band of misfit strays. Sabin Marks shoots The Valiant out of the sky. Leo unleashes Cerberus and joins the assault himself. His hellhound devours crew members as they evacuate the grounded vessel, and Biggs engages Leo while having Titan occupy Cerberus. Leo kills Biggs, mutilating and disgracing his corpse, kicking it around. Using wind magic from Sylph and Typhoon and his Qiang training, Wedge clashes with Leo in a heated duel. The heroes gain control and drive off the Initiative and the Marks brothers.

Reinforcements arrive and the ship is fixed up. Biggs is buried next to Ana at the Mana Tree.

The group heads for The Tomb of Alexander. But Seto rushes ahead of the party while they take twenty to prepare. While entering the musty stairwell he is ambushed by Sarovoc and shot three times in the torso. Priestess Talla has had a vision of Sarovoc obtaining Alexander & Crusader, and through it, Seto's death. She rushes by chocobo to his aid, reviving him before it's too late.

Zenobia invades Whiteguard with force. Sarovoc marches into the High Temple and cuts Talla's throat when she tells him that Seto will come for him.

Marrick betrays the gang, snatching Seto's pendant and Bahamut along with it. He's cut a deal with Sarovoc to spare his people.

Final Fantasy VII

Marrick serves Sarovoc, attempting to use Bahamut to create a weapon capable of protecting Zenobia from the Gods by killing them. The pendant remains useless to them and Sarovoc grows angry with Marrick.

Kirin once again visits Seto aboard the Valiant. He tells him of one who may be able to defeat Alexander; Odin and Gilgamesh become the group's next move. He also gives him Shoat, an odd Lesser Deity, granting him access to low-level Time Magic.

At High Rock Plateau, the group retrieves Odin & Gilgamesh, and Lilly squares off with Leo, telling Cid to go and meet her at a nearby rebel base. She kills Leo in a fierce, intense battle. As he lies bleeding out, she hesitantly removes his mask, revealing a baby-faced young man. A single tear rolls down his cheek and she realizes they were not so different. The last of their kind, victims of Zenobia.

Seto forces Cid to go back for Lilly.

When Lyra finds out Leo was killed, she goes on a war path.

She hunts the heroes down at a rebel base, and Zenobians hit it hard. Lyra faces off with Lilly in a blind rage, shooting her in the stomach before using her scimitar to have fun with her. Lilly digs deep and kills Lyra, freeing her from her torment, and putting an end to the Shinjan race.

Sarovoc uses Alexander to influence the people of Whiteguard, claiming to be a child of the Gods and their chosen ruler. He aims to ensure the easy allegiance of Whiteguard to Zenobia and the Ducrinus name forever. Marrick discovers this and tries to kill Sarovoc. He fails and Sarovoc kills him.

A movement begins to muster forces from around the world to attack the city of Zenobia. Banion gathers warrior monks by the hundreds. Juakeem summons his brethren and their war birds. Bastian and the resistance amass the largest rebel force ever assembled. Men and women from all corners of the world answer the call. Casius Magnus returns to The Veldt and does his best to sway the council, but they send him away, refusing to involve themselves. Word spreads and a group of several hundred follow Casius, becoming exiles to fight the Empire.

The forces muster around Zenobia. Casius leads a preemptive strike with a savage blitz. He summons Doomtrain, which runs through Zenobia repeatedly, leaving devastation and rusted railroad tracks in its wake. He summons a flurry of meteors that ravage the core of the metropolis. Ifrit scorches hundreds of men, cooking them alive as Casius and his warriors unleash upon the masses. Casius casts himself dry, his reflect spell falling as hundreds of bullets rip through him...

The organized forces stand in anticipation, watching the show from afar. Sarovoc is enraged by the devastation and unleashes his summons upon the opposing masses. The heroes respond and the rebel forces charge the Zenobian lines. A titanic duel of Gods is unfolding in the sand around, sky above, and in the streets of Zenobia as an epic, bloody conflict plays out in the sands outside the city.

Seto rides Boko into the battle alongside Juakeem and Shae'elle. Sarovoc releases Alexander upon the rebel soldiers. He obliterates masses of men before Seto sends Gilgamesh to keep him busy. An epic fight ensues between the two. Seto finds Sarovoc and squares off with him. He kills him, ending the fight between Alexander and Gilgamesh before either would claim victory.

Far away in the wilderness of the Sobral territory, Henato Jr. seeks advice from his father. His father tells him that he holds a terrible responsibility. Fighting back tears, he tells him he will never know how sorry he is that it had to be him, but only Henato Jr. can summon Bahamut ZERO.

Henato Jr is torn. He understands that Zenobia must be defeated, but he also understands the weight of what it will mean.

As Sarovoc lies dying, the cloudy sky rips open as a massive onyx dragon penetrates the atmosphere. It hits the core of Zenobia with a Tetra Flare roughly equivalent to a 50-megaton nuclear warhead. Zenobia is levelled, with only portions of the outermost slums left intact. Seto pulls his pendant from around Sarovoc's neck in the foggy aftermath. Zenobians lay down their arms as word of the unholy event spreads and soldiers return to behold the ruins of their capital.

Advanced weapons of any kind are outlawed and destroyed all over Zenobia's reach and rebel holds and cultures. Universal peace is an ideal adopted by the major nations and enforced harshly. New Zenobia is established out of the ruins, and a new government and ideology based on Ana's beliefs is adopted. Stryker is pivotal in the rebuilding of Zenobia, and afterward travels the world spreading Ana's messages as he writes a book titled The Epic of Anastasia chronicling her journey and the resulting revolution and defeat of the Empire. He becomes an ambassador of peace, equality, and New Zenobia.

Casius and Marrick are buried with Ana and Biggs at the Mana Tree...    

That One NPC


Bahamir

Bahamir

After the Great War, the Bahamir Dynasty had been decimated. The Essences had been gathered, but the Bahamuts were permitted to remain in the hands of the Bahamir people. Simply "hiding" them anywhere was too risky. Rather than establish a kingdom to rebuild, they chose to once again become tribal, retreating deep into the highland wilderness near the base of Mount Ramuh. They would remain hidden away, protecting the Essences of the three Bahamuts. They divided into three core tribes. The central tribe would house the old royal family, and a portion of the nobles and common classes. But it would also house the Bahamuts, kept in the possession of the royal bloodline. They have remained in the highlands for two long eras.

Henato Senior

King of the Bahamir people, Henato grows old and must prepare his sons to inherit his kingdom. Henato Jr., Sedato, and Seto are his three boys, from oldest to youngest.

Henato Jr. is the ideal Bahamir Prince. Strong, hearty, honorable and humane. He is a folk hero of the people and somewhat of a celebrity among young boys and females of all ages. He possesses the pendant containing the essence of Bahamut ZERO.

Sedato remains close behind Henato Jr., following in his footsteps, led by his example and influence. He and his older brother live like rock stars of the Dragoon Kingdom, doing their best to retain their honor and dignity all the while. Sedato possesses the pendant containing the essence of Neo Bahamut.
Seto is the runt of the litter. He struggles to conform to the pressures and expectations of a young Dragoon Prince. He's not overly strong, smart, brave, or wise. He develops anxiety and insecurities by the time he reaches his teen years as a result of being a member of the royal family. Unsure of himself, his life, or his ability to be a Prince, Seto leaves his people, fleeing into the wilderness to escape the weight of an entire kingdom resting on his shoulders. With him travels the pendant containing the essence of Bahamut.

Seto

A young Seto travels the world, never finding a place he could fit in. A place he could call home. He learns to become a phantom, never staying in one place for too long. But still, he cannot resist the ways of his people, lending a hand to help anywhere he can. By his early twenties, he has an understanding of the realm of Odinspawn and what makes their cultures tick. He blends into their routines, but remains a ghost. During his time spent in the Slums of Zenobia, he meets a man by the name of Edge Widowmaker, an ex-Zenobian soldier and renowned outlaw. Seto draws strength from his persona, taking the name Edge in place of his own after witnessing the renegade's death during a gunfight. Using the name Edge garners fear and respect beyond the sands of Zenobia, where the name holds weight but few knew the man personally.

Seto continues to use the moniker during his travels. Through it, Seto is introduced to the People's Rebellion of Zenobia, an underground network of anti-Zenobian cells stretching across the Empire's reach. It is here that Seto finally finds a cause he can call his own. Something to fight for.

During a lone-wolf attempt to confront and kill Sarovoc, Seto has a chance encounter with a young, independent freedom fighter by the name of Anastasia Maria Grace.

That One NPC


Loose Leaf Sprinting Template (Male)

Some of you may recall the sprinting template I was working on for Mack's Loose Leaf sprite generating resources. It is now complete, and it's looking good when tested at running speeds. At walking speed it looks very choppy, as one would expect.

Here is a sample of a sprite I took and made a sprint sheet for. I'm not going to sit here and tell you it's a quick cut & paste job. It takes some work to adjust certain parts to match the new limb positioning, but I have some tips to make it much smoother.

This is the template. On the left you have the standard walking sheet, on the right is the sprinting edit.

I suggest assembling your walking sprite as you normally would, using any image editor like Paint.net, for example. Keep the project file with all layers separated. You'll use this to transfer layers one by one to the sprint template, editing and positioning them to your liking while retaining all edits and color selections. That way, the clothing you apply to your sprint template is the exact same as the clothing on your walking template.

This makes it much easier to create the running version of your sprite. I suggest starting with the pants and feet, working up to the body. I actually separated the sleeves from the body of my sprite's undershirt to make that process easier and a bit more painless.

Credit only to Mack. I don't share my edits for personal credit.

Enjoy! Don't be shy about reporting errors/bugs.

That One NPC


FF8-Based Triad Deck

New year. New deck.

Credits: Raizen, CLOSET, Kas, Enterbrain

All card values are based on the deck from Final Fantasy VIII.

(Card images: Card Back, Icons, and Rarity 1-5 boosters: Common, Uncommon, Rare, Epic, Legendary.)

The Card Settings file will be ready soon. I have to hammer some bugs out of my demo before I can even test it properly. My card album as well as pretty much all triad scenes won't work at present, and I'm not sure why. I will be working on getting those settings tested this weekend. The file is drafted, but yet to be checked for errors.

That One NPC


TUTORIAL: Moving background for trains and trams

In this tutorial, I'll be teaching you how to make a moving train/tram/other vehicle/etc. without much work! This is NOT the same as a vehicle event. This is to get the effect of movement in the background.   This is something I decided to do in my remake of the first Half-Life game, for the first chapter, Black Mesa Inbound.   WHAT YOU'LL NEED:
RPG Maker (I use VX Ace, but I'm sure it works in any of them, given you have access to something like the below script.)
HimeWorks' Map Screenshot script (or similar)   First, make your train or tram.
The blank background behind it is extremely important.

Next, you need to make the scenery you want to have the appearance of moving, i.e., another map. A really, really long map. Once you make it, use the script to take a mapshot.
Copy your mapshot into this folder in the Graphics folder of your project...:
...And then go back to your tram/train map and put your mapshot as a Parallax Background in your Map Properties. For Black Mesa Inbound, I don't need it to move very fast, so I have it on these settings: Then time it, calculate the time in frames (at the default 60 frames per second, a 10-second ride is 600 frames), and have the map change to the "dropoff location" (Parallel Process event). In the case of my Half-Life remake, that would be a still map where a guard walks over and lets you out, then leads you to the next chapter.

What if you have to use multiple tilesets to get the effect you want? (i.e., simulating going through numerous locations.) I had that same question myself, and it was an easy enough answer: make multiple maps, take multiple mapshots, and splice them all together into one image. I'm having to do this for Black Mesa Inbound.

What if I want background characters to move on the background instead of standing in one place? I don't have an answer for this one. I considered having the characters all be set to "Below" and "Through" and sort of shimmy in place before being set to Transparent, but that didn't quite work. In the end, having them standing there looks populated enough for my taste. If anybody has an answer to this, please comment it down below!

Anyway, I hope you find this tutorial useful in your development endeavours!
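P.S. If you'd rather handle the timing with a Script call instead of a long Wait command, here's a minimal RGSS3 sketch of the same idea. The ride length, map ID, arrival coordinates, and the game variable used as a frame counter are all placeholder values for this example, so swap in your own; a plain Wait for the calculated number of frames followed by a Transfer Player command does exactly the same job.

```ruby
# Script call for the Parallel Process event (RPG Maker VX Ace / RGSS3).
# All values below are placeholders for this example.
RIDE_SECONDS   = 10                                   # assumed length of the tram ride
RIDE_FRAMES    = RIDE_SECONDS * Graphics.frame_rate   # 60 fps by default -> 600 frames
DROPOFF_MAP_ID = 5                                    # hypothetical drop-off map
DROPOFF_X, DROPOFF_Y = 8, 12                          # hypothetical arrival tile

$game_variables[1] += 1                               # variable 1 used as a frame counter
if $game_variables[1] >= RIDE_FRAMES
  $game_variables[1] = 0
  $game_player.reserve_transfer(DROPOFF_MAP_ID, DROPOFF_X, DROPOFF_Y, 2)
end
```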

AutumnAbsinthe


New Deck

I sat down last week and decided to finally fix my broken, messy CLOSET Triad deck. I had made my own deck and the card values were just a total mess. This time I copied the values from all 110 of FF8's cards, and made a replica deck that is guaranteed to be playable. I still need to program and test the card settings file, but that should be done by the new year.

New booster icon edits: Common, Uncommon, Rare, Epic, Legendary.

Some new cards have been added, some old ones were removed. The values of two cards were slightly altered, but it shouldn't impact the balancing of the deck.

That One NPC


Why deciding when to refactor can be complicated and convoluted

Let's imagine that the job of a harvester is to use an axe to harvest trees, and the axe will deteriorate over time. Assume the following expected performance of the axe:

Fully sharp axe (extremely excellent effectiveness and efficiency; ideal defect rates):
- 1 tree cut / hour
- 1 / 20 chance for the tree being cut to be defective (with 0 extra decent trees to be cut as compensating trees, due to negligible damage caused by defects)
- Expected number of normal trees / tree cut = (20 - 1 = 19) / 20
- Becomes a somehow sharp axe after 20 trees cut (a fully sharp axe will become a somehow sharp axe rather quickly)

Somehow sharp axe (reasonably high effectiveness and efficiency; acceptable defect rates):
- 1 tree cut / 2 hours
- 1 / 15 chance for the tree being cut to be defective (with 1 extra decent tree to be cut as a compensating tree, due to nontrivial but small damage caused by defects)
- Expected number of normal trees / tree cut = (15 - 1 - 1 = 13) / 15
- Becomes a somehow dull axe after 80 trees cut (a somehow sharp axe will usually be much more resistant to losing its sharpness per tree cut than a fully sharp axe)
- Needs 36 hours of sharpening to become a fully sharp axe (no trees cut during the atomic process)

Somehow dull axe (barely tolerable effectiveness and efficiency; alarming defect rates):
- 1 tree cut / 4 hours
- 1 / 10 chance for the tree being cut to be defective (with 2 extra decent trees to be cut as compensating trees, due to moderate but manageable damage caused by defects)
- Expected number of normal trees / tree cut = (10 - 1 - 2 = 7) / 10
- Becomes a fully dull axe after 40 trees cut (a somehow dull axe is just ineffective and inefficient, but a fully dull axe is significantly dangerous to use when cutting trees)
- Needs 12 hours of sharpening to become a somehow sharp axe (no trees cut during the atomic process)

Fully dull axe (ridiculously poor effectiveness and efficiency; obscene defect rates):
- 1 tree cut / 8 hours
- 1 / 5 chance for the tree being cut to be defective (with 3 extra decent trees to be cut as compensating trees, due to severe but partially recoverable damage caused by defects)
- Expected number of normal trees / tree cut = (5 - 1 - 3 = 1) / 5
- Becomes an irreversibly broken axe (way beyond repair) after 160 trees cut
- The harvester will resign if the axe keeps being fully dull for 320 hours (no one will be willing to work that dangerously forever)
- Needs 24 hours of sharpening to become a somehow dull axe (no trees cut during the atomic process)

Now, let's try to come up with some possible work schedules:

Sharpens the axe to be fully sharp as soon as it becomes somehow sharp:
- Expected to have 19 normal trees and 1 defective tree cut after 1 * (19 + 1) = 20 hours (simplifying "1 / 20 chance for the tree being cut to be defective" to "1 defective tree / 20 trees cut")
- Expected the axe to become somehow sharp now, and become fully sharp again after 36 hours
- Expected long-term throughput to be 19 normal trees / (20 + 36 = 56) hours (around 33.9%)

Sharpens the axe to be somehow sharp as soon as it becomes somehow dull:
- The initial phase of having the axe fully sharp is skipped, as it won't be repeated
- Expected to have 68 normal trees, 6 defective trees, and 6 compensating trees cut after 2 * (68 + 6 + 6) = 160 hours (simplifying "1 / 15 chance for the tree being cut to be defective" to "1 defective tree / 15 trees cut" and using the worst case)
- Expected the axe to become somehow dull now, and become somehow sharp again after 12 hours
- Expected long-term throughput to be 68 normal trees / (160 + 12 = 172) hours (around 39.5%)

Sharpens the axe to be somehow dull as soon as it becomes fully dull:
- The initial phase of having the axe fully or somehow sharp is skipped, as it won't be repeated
- Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + 8) = 160 hours (simplifying "1 / 10 chance for the tree being cut to be defective" to "1 defective tree / 10 trees cut")
- Expected the axe to become fully dull now, and become somehow dull again after 24 hours
- Expected long-term throughput to be 28 normal trees / (160 + 24 = 184) hours (around 15.2%)

Sharpens the axe to be somehow dull right before the harvester will resign:
- The initial phase of having the axe fully or somehow sharp is skipped, as it won't be repeated
- Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + 8) = 160 hours (simplifying "1 / 10 chance for the tree being cut to be defective" to "1 defective tree / 10 trees cut") while the axe is somehow dull
- Expected the axe to become fully dull now, and expected to have 4 normal trees, 8 defective trees, and 24 compensating trees cut after 8 * (4 + 8 + 24) = 288 hours (simplifying "1 / 5 chance for the tree being cut to be defective" to "1 defective tree / 5 trees cut" and using the worst case) while the axe is fully dull
- Expected total number of normal trees to be 28 + 4 = 32
- Expected the axe to become somehow dull again after 24 hours (so the axe remained fully dull for 288 + 24 = 312 hours, just under the 320-hour maximum before the harvester will resign)
- Expected long-term throughput to be 32 normal trees / (160 + 312 = 472) hours (around 6.7%)

Sharpens the axe to be fully sharp as soon as it becomes somehow dull:
- Expected total number of normal trees to be 19 + 68 = 87
- Expected total number of hours to be 56 + 172 = 228 hours
- Expected long-term throughput to be 87 normal trees / 228 hours (around 38.2%)

Sharpens the axe to be fully sharp as soon as it becomes fully dull:
- Expected total number of normal trees to be 19 + 68 + 28 = 115
- Expected total number of hours to be 56 + 172 + 184 = 412 hours
- Expected long-term throughput to be 115 normal trees / 412 hours (around 27.9%)

Sharpens the axe to be fully sharp right before the harvester will resign:
- Expected total number of normal trees to be 19 + 68 + 32 = 119
- Expected total number of hours to be 56 + 172 + 472 = 700 hours
- Expected long-term throughput to be 119 normal trees / 700 hours (17%)

Sharpens the axe to be somehow sharp as soon as it becomes fully dull:
- Expected total number of normal trees to be 68 + 28 = 96
- Expected total number of hours to be 172 + 184 = 356 hours
- Expected long-term throughput to be 96 normal trees / 356 hours (around 26.9%)

Sharpens the axe to be somehow sharp right before the harvester will resign:
- Expected total number of normal trees to be 68 + 32 = 100
- Expected total number of hours to be 172 + 472 = 644 hours
- Expected long-term throughput to be 100 normal trees / 644 hours (around 15.5%)

So, while these work schedules clearly show that sharpening the axe is important for maintaining effective and efficient long-term throughput, trying to keep it always fully sharp is certainly going overboard (33.9% throughput) when being somehow sharp is already enough (39.5% throughput); a short script re-deriving these figures follows the model discussion below. Then why do some bosses not let the harvester sharpen the axe even when it's somehow or even fully dull?
Because sometimes, a certain amount of normal trees have to be acquired within a set amount of time. Let's say that the axe has become from fully sharp to just somehow dull, so there should be 87 normal trees cut after 180 hours, netting the short term throughput of 48.3%. But then some emergencies just come, and 3 extra normal trees need to be delivered within 16 hours no matter what, whereas compensating trees can be delivered later in the case of having defective trees. In this case, there won't be enough time to sharpen the axe to be even just somehow sharp, because even in the best case, it'd cost 12 + 2 * 3 = 18 hours. On the other hand, even if there's 1 defective tree from using the somehow dull axe within that 16 hours, the harvester will still barely make it on time, because the chance of having 2 defective trees is (1 / 10) ^ 2 = 1 / 100, which is low enough to be neglected for now, and as compensatory trees can be delivered later even if there's 1 defective tree, the harvester will be able to deliver 3 normal trees. In reality, crunch modes like this will happen occasionally, and most harvesters will likely understand that it's probably inevitable eventually, so as long as these crunch modes won't last for too long, it's still practical to work under such circumstances once in a while, because it's just being reasonably pragmatic.   However, in supposedly exceptional cases, the situation's so extreme that, when the harvester's about to sharpen the axe, the boss constantly requests that another tree must be acquired as soon as possible, causing the harvester to never have time to sharpen the axe for a long time, thus having to work more and more ineffectively and inefficiently in the long term. In the case of a somehow dull axe, 12 hours are needed to sharpen it to be somehow sharp, whereas another tree's expected to be acquired within 4 hours, because the chance of having a defective tree cut is 1 / 10, which can be considered small enough to take the risk, and the expected number of normal trees over all trees being cut is 7 of out 10 for a somehow dull axe, whereas 12 hours is enough to cut 3 trees by using such an axe, so at least 2 normal trees can be expected within this period. If this continues, eventually the axe will become fully dull, and 24 hours will be needed to sharpen it to be somehow dull, whereas another tree's expected to be acquired within 8 hours, because the chance of having a defective tree is 1 / 5, which can still be considered controllable to take the risk, especially with an optimistic estimation. While the expected number of normal trees over all trees being cut is 1 of out 5 for a fully dull axe, whereas 24 hours is just enough to cut 3 trees by using such an axe, meaning that the harvester's not expected to make it normally, in practice, the boss will usually unknowingly apply optimism bias(at least until it no longer works) by thinking that there will be no defective trees when just another tree's to be cut, so the harvester will still be forced to continue cutting trees, despite the fact that the axe should be sharpened as soon as possible even when just considering the short term. 
Also, if the boss can readily replace the current harvester with a new one immediately, the boss will rather let the current harvester resign than letting that harvester sharpening the axe to be at least somehow dull, because to the boss, it's always emergencies after emergencies, meaning that the short term's constantly so dire that there's just no room to even consider the long term at all. But why such an undesirable situation will be reached? Other than extreme and rare misfortunes, it's usually due to overly optimistic work schedules not seriously taking the existence of defective and compensatory trees, and the importance of the sharpness of the axe and the need of sharpening the axe into the account, meaning that such unrealistic work schedules are essentially linear(e.g.: if one can cut 10 trees on day one, then he/she can cut 1000 trees on day 100), which is obviously simplistic to the extreme. Occasionally, it can also be because of the inherent risks of sharpening the axe - Sometimes the axe won't be actually sharpened after spending 12, 24 or 36 hours, and while it's extraordinary, the axe might be actually even more dull than before, and most importantly, usually the boss can't directly judge the sharpness of the axe, meaning that it's generally hard for that boss to judge the ROI of sharpening the axe with various sharpness before sharpening, and it's only normal for the boss to distrust what can't be measured objectively by him/herself(on the other hand, normal, defective and compensatory trees are objectively measurable, so the boss will of course emphasize on these KPIs), especially for those having been opting for linear thinking.   Of course, the whole axe cutting tree model is highly simplified, at least because: The axe sharpness deterioration isn't a step-wise function(an axe becomes from having a discrete level of sharpness to another such level after cutting a set number of trees), but rather a continuous one(gradual degrading over time) with some variations on the number of trees cut, meaning that when to sharpen the axe in the real world isn't as clear cut as that in the aforementioned model(usually it's when the harvester starts feeling the pain, ineffectiveness and inefficiency of using the axe due to unsatisfactory sharpness, and these feeling has last for a while) Not all normal trees are equal, not all defective trees are equal, and not all compensatory trees are equal(these complications are intentionally simplified in this model because these complexities are hardly measurable) The whole model doesn't take the morale of the harvester into account, except the obvious one that that harvester will resign for using a fully dull axe for too long(but the importance of sharpening the axe will only increase will morale has to be considered as well) In some cases, even when the axe's not fully dull, it's already impossible to sharpen it to be fully or even just somehow sharp(and in really extreme cases, the whole axe can just suddenly break altogether for no apparent reason) Nevertheless, this model should still serve its purpose of making this point across - There's isn't always an universal answer to when to sharpen the axe to reach which level of sharpness, because these questions involve calculations of concrete details(including those critical parts that can't be quantified) on a case-by-case basis, but the point remains that the importance of sharpening the axe should never be underestimated.   
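Before mapping this onto software, here's a small Ruby sketch that recomputes the steady-state throughput figures from the work schedules above; the parameters and the worst-case rounding of defective trees follow the model as written, and the names in the script are just for illustration.

```ruby
# Recomputes the per-state, long-term throughput from the axe model above.
# Each cycle: cut trees at one sharpness level until the axe degrades,
# then pay the sharpening time to get back to that level.
STATES = {
  fully_sharp:   { hours: 1, defect_in: 20, compensate: 0, wear: 20, resharpen: 36 },
  somehow_sharp: { hours: 2, defect_in: 15, compensate: 1, wear: 80, resharpen: 12 },
  somehow_dull:  { hours: 4, defect_in: 10, compensate: 2, wear: 40, resharpen: 24 },
}

def throughput(state)
  cut       = state[:wear]                          # trees cut per cycle
  defective = (cut.to_f / state[:defect_in]).ceil   # worst-case rounding, as in the post
  normal    = cut - defective - defective * state[:compensate]
  hours     = cut * state[:hours] + state[:resharpen]
  [normal, hours, normal.to_f / hours]
end

STATES.each do |name, state|
  normal, hours, rate = throughput(state)
  puts format("keep axe %-13s : %d normal trees / %d hours (%.1f%%)",
              name, normal, hours, rate * 100)
end
# => keep axe fully_sharp   : 19 normal trees / 56 hours (33.9%)
# => keep axe somehow_sharp : 68 normal trees / 172 hours (39.5%)
# => keep axe somehow_dull  : 28 normal trees / 184 hours (15.2%)
```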
When it comes to professional software engineering: The normal trees are like needed features that work well enough The defective trees are like nontrivial bugs that must be fixed as soon as possible(In general, the worse the code quality of the codebase is, the higher the chance to produce more bugs, produce bugs being more severe, and the more the time's needed to fix each bug with the same severity - More severe bugs generally cost more efforts to fix in the same codebase) The compensatory trees are like extra outputs for fixing those bugs and repairing the damages caused by them The axe is like the codebase that's supposed to deliver the needed features(actually, the axe can also be like those software engineers themselves, when the topic involved is software engineering team management rather than just refactoring) Sharpening the axe is like refactoring(or in the case of the axe referring to software engineers, sharpening the axe can be like letting them to have some vacations to recover from burnouts) A fully sharp axe is like a codebase suffering from the gold plating anti pattern on the code quality aspect(diminishing returns applies to code qualities as well), as if those professional software engineers can't even withstand a tiny amount of technical debt. On the good side, such an ideal codebase is the most unlikely to produce nontrivial bugs, and even when it does, they're most likely fixed with almost no extra efforts needed, because they're usually found way before going into production, and the test suite will point straight to their root causes. A somehow sharp axe is like a codebase with more than satisfactory code qualities, but not to the point of investing too much on this regard(and the technical debt is still doing more good than harm due to its amount under moderation). Such a practically good codebase is still a bit unlikely to produce nontrivial bugs regularly, but it does have a small chance to let some of them leak into production, causing a mild amount of extra efforts to be needed to fix the bugs and repair the damages caused by them. A somehow dull axe is like a codebase with undesirable code qualities, but it's still an indeed workable codebase(although it's still quite painful to work with) with a worrying yet payable amount of technical debt. Undesirable yet working codebases like this probably has a significant chance to produce nontrivial bugs frequently, and a significant chance for quite some of them to leak into production, causing a rather significant amount of extra efforts to be needed to fix the bugs and repair the damages caused by them. A fully dull axe is like a unworkable codebase where it must be refactored as soon as possible, because even senior professional software engineers can easily create more severe bugs than needed features with such a codebase(actually they'll be more and more inclined to rewrite the codebase the longer it's not refactored), causing their productivity to be even negative in the worst cases. An effectively broken codebase like this is guaranteed to has a huge chance to produce nontrivial bugs all the time, and nearly all of them will leak into production, causing an insane amount of extra efforts to be needed to fix the bugs and repair the damages caused by them(so the professionals will be always fixing bugs instead of delivering features), provided that these recovery moves can be successful at all. 
A broken axe is like a codebase being totally technical bankrupt, where the only way out is to completely rewrite the whole thing from scratch, because no one can fathom a thing in that codebase at that point, and sticking to such a codebase is undoubtedly a sunk cost fallacy. While a codebase with overly ideal code qualities can deliver the needed features in the most effective and efficient ways possible as long as the codebase remains in this state, in practice the codebase will quickly degrade from such an ideal state to a more practical state where the code qualities are still high(on the other hand, going back to this state is very costly in general no matter how effective and efficient the refactoring is), because this state is essentially mysophobia in terms of code qualities. On the other hand, a codebase with reasonably high code qualities can be rather resistant from code quality deterioration(but far from 100% resistant of course), especially when the professional software engineers are disciplined, experienced and qualified, because degrading code qualities for such codebases are normally due to quick but dirty hacks, which shouldn't be frequently needed for senior professional software engineers. To summarize, a senior professional software engineer should strive to keep the codebase to have a reasonably high code quality, but not to the point of not even having good technical debts, and when the codebase has eventually degraded to have just barely tolerable code quality, it's time to refactor it to become having very satisfactory, but not overly ideal, code quality again, except in the case of occasional crunch modes, where even a disciplined, experienced and qualified expert will have to get the hands dirty once in a while on the still workable codebase but with temporarily unacceptable code quality, just that such crunch modes should be ended as soon as possible, which should be feasible with a well-established work schedule.

DoubleX


My Predictions Of The Future Multiplayer Game Architectures

The following image briefly outlines the core structure of this whole idea, which is based on the idea of applying purely server-side rendering on games: Note that the client side should have next to no game state or data, nor audio/visual assets, as they're supposed to never leave the server side. The following's the general flow of games using this architecture(all these happen per frame): 1. The players start running the game with the client IO 2. The players setup input configurations(keyboard mapping, mouse sensitivity, mouse acceleration, etc), graphics configurations(resolution, fps, gamma, etc), client configurations(player name, player skin, other preferences not impacting gameplay, etc), and anything that only the players can have information of 3. The players connect to servers 4. The players send all those configurations and settings to the servers(those details will be sent again if players changed them during the game within the same servers) 5. The players makes raw inputs(like keyboard presses, mouse clicks, etc) as they play the game 6. The client IO captures those raw player inputs and sends them to the server IO(but there's never any game data/state synchronization among them) 7. The server IO combines those raw player inputs and the player input configurations for each player to form commands that the game can understand 8. Those game commands generated by all players in the server will update the current game state set 9. The game polls the updated current game state set to form the new camera data for each player 10. The game combines the camera data with the player graphics configurations to generate the rendered graphics markups(with all relevant audio/visual assets used entirely in this step) which are highly compressed and obfuscated and have the least amount of game state information possible 11. The server IO captures the rendered graphics markups and send them to the client IO of each player(and nothing else will ever be sent in this direction) 12. The client IO draws the fully rendered graphics markups(without needing nor knowing any audio/visual asset) on the game screen visible by each player The aforementioned flow can also be represented this way:   The advantages of this architecture at least include the following: 1. The game requirements on the client side can be a lot lower than the traditional architecture, as now all the client side does is sending the captured raw player inputs(keyboard presses, mouse clicks, etc) to the server side, and draws the received rendered graphics markup(without using any audio/visual assets in this step and the client side doesn't have any of them anyway) on the game screen visible by each player 2. Cheating will become next to impossible, as all cheats are based on game information, and even the state of the art machine vision still can't retrieve all the information needed for cheating within a frame(even if it just needs 0.5 seconds to do so, it's already too late in the case of professional FPS E-Sports, not to mention that the rendered graphics markup can change per frame, making machine vision even harder to work well there), and it'd be a epoch-making breakthrough on machine vision if the cheats can indeed generate the correct raw player inputs per frame(especially when the rendered graphics markups are highly obfuscated), which is definitely doing way more good than harm to the mankind, so games using this architecture can actually help pushing the machine vision researches. 3. 
Game piracy and plagiarisms will become a lot more costly and difficult, as the majority of the game contents and files never leave the servers, meaning that those servers will have to be hacked first before those pirates can crack those games, and hacking a server with the very top-notch security(perhaps monitored by network and server security experts as well) is a very serious business that not many will even have a chance 4. Game data and state synchronization should no longer be an issue, because the client side should've nearly no game data and state, meaning that there's should be nothing to synchronize with, thus this setup not only removes tons of game data/state integrity troubles and network issues, but also deliberate or accidental exploits like lag switching(so servers no longer has to kick players with legitimately high latency because those players won't have any advantage anymore, due to the fact that such exploits would just cause the users to become inactive for a very short time per lag in the server, thus they'd be the only ones being under disadvantages)   The disadvantages of this architecture at least include the following: 1. The game requirements on the server side will become ridiculous - perhaps a supercomputer, computer cluster, or a computer cloud will be needed for each server, and I just don't know how it'll even be feasible for MMO to use this architecture in the foreseeable future 2. The network traffic in this architecture will be absurdly high, because all players are sending raw input to the same server, which sends back the rendered graphics markup to each player(even though it's already highly compressed), all happening per frame, meaning that this can lead to serious connection issues with servers having low capacity and/or players with low connection speed/limited network data usage 3. The maintenance cost of the games on the business side will be a lot higher, because the servers need to be much, much more powerful than those running games not using this architecture 4. Because the players are supposed to send raw inputs per frame, and there will be limits of the number of packets to be sent to the server per second, it means that either the game tick rate on the server will be capped by the lowest network packet sent rate among all players(otherwise they'd behave like not inputting anything once every several frames), or some kind of input synchronization mechanisms will be needed between the server IO and the server game state set(but it's still a much lesser evil than synchronizing game data/states between the client and server sides)   Clearly, the advantages from this architecture will be unprecedented if the architecture itself can ever be realized, while its disadvantages are all hardware limitations that will become less and less significant, and will eventually becomes trivial. So while this architecture won't be the reality in the foreseeable future(at least several years from now), I still believe that it'll be the distant future(probably in terms of decades).   If this architecture becomes the practical mainstream, the following will be at least some of the implications: 1. 
Clearly, the advantages of this architecture will be unprecedented if the architecture itself can ever be realized, while its disadvantages are all hardware limitations that will become less and less significant and will eventually become trivial. So while this architecture won't be a reality in the foreseeable future (at least several years from now), I still believe that it'll be realized in the distant future (probably in terms of decades).

If this architecture becomes the practical mainstream, the following will be at least some of the implications:
1. The direct one-time price of the games, and also the indirect one (the need to upgrade the client machine to play those games), will be noticeably lower, as the games are much less demanding on the client side (drawing an already rendered graphics markup, especially without needing any audio nor visual assets, is generally a much, much easier, simpler and smaller task than generating that markup itself, and the client side hosts almost no game data nor state, so the hard disk space and memory required will also be a lot lower)
2. Periodic subscription fees will exist in more and more games, and those already having such a fee will likely increase it, in order to compensate for the increasing game maintenance cost from upgraded servers (these maintenance cost increments will eventually be cancelled out by hardware improvements causing the same hardware to become cheaper and cheaper)
3. Companies previously making high end client CPUs, GPUs, RAM, hard disks, motherboards, etc will gradually shift their business into making server counterparts, because the demand for high end hardware will become relatively smaller and smaller on the client side, but relatively larger and larger on the server side
4. The demand for high end servers will be higher and higher, not just from game companies, but also from some players investing a lot into those games, because they'd have the incentive to build such servers themselves, then either use them to host some games, or rent those servers to others who do

In the case of highly competitive E-Sports, the server can even implement some kind of fuzzy logic, fine-tuned with a deep learning AI, to help report suspicious raw player input sets (consisting of keyboard presses, mouse clicks, etc) with a rating on how suspicious each one is, which can be further broken down into more detailed components on why it's that suspicious. This can only be done effectively and efficiently if the server has direct access to the raw player input set, which is one of the cornerstones of this very architecture. Combining this with traditional anti cheat measures, like having a server with the highest security level, an in-game admin having server level access to monitor all players in the server (now with the aid of the AI reporting suspicious raw player input sets for each player), another admin for each team/side to monitor player activities, a camera for each player, and thoroughly inspected player hardware, will not only make cheating next to impossible in major LAN events (which are also cut off from external connections), but also so obviously infeasible and unrealistic that almost everyone will agree that cheating is indeed nearly impossible there, thus drastically increasing their confidence in match fairness.
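As a toy illustration of that kind of raw player input screening (just a hand-rolled heuristic with made-up features and thresholds, not the fuzzy logic plus deep learning setup itself), a server with direct access to the raw input stream could rate each player's inputs per interval like this:

// A toy heuristic that rates how suspicious a raw input set looks, broken
// down into components as described above. The features and thresholds are
// entirely made up for illustration; a real system would tune them against
// actual match data (e.g. with machine learning).
function rateRawInputs(rawInputs) {
    // rawInputs: [{ timeMs, type: "mouseMove"|"mouseClick"|"keyDown", dx, dy }, ...]
    var clicks = rawInputs.filter(function(i) { return i.type === "mouseClick"; });
    var moves = rawInputs.filter(function(i) { return i.type === "mouseMove"; });
    var intervals = [];
    for (var i = 1; i < clicks.length; i++) {
        intervals.push(clicks[i].timeMs - clicks[i - 1].timeMs);
    }
    var components = {
        // Inhumanly consistent click timing (possible macro/triggerbot)
        clickTimingConsistency: intervals.length > 1 && spread(intervals) < 2 ? 0.8 : 0.1,
        // Large instant mouse jumps with no intermediate movement (possible aim snapping)
        mouseSnapping: moves.some(function(m) {
            return Math.abs(m.dx) + Math.abs(m.dy) > 500;
        }) ? 0.7 : 0.1,
        // Superhuman sustained click rate within the interval
        clickRate: clicks.length > 20 ? 0.6 : 0.05
    };
    var overall = Object.keys(components).reduce(function(max, key) {
        return Math.max(max, components[key]);
    }, 0);
    return { overall: overall, components: components };
}
function spread(nums) {
    return Math.max.apply(null, nums) - Math.min.apply(null, nums);
}
// Usage: the server IO already has these raw inputs per frame/interval
console.log(rateRawInputs([
    { timeMs: 0, type: "mouseClick", dx: 0, dy: 0 },
    { timeMs: 100, type: "mouseClick", dx: 0, dy: 0 },
    { timeMs: 200, type: "mouseClick", dx: 0, dy: 0 },
    { timeMs: 210, type: "mouseMove", dx: 900, dy: 0 }
]));

The in-game admin mentioned above would then only need to review the players whose overall ratings stay high across many intervals.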
Of course, games can also use a hybrid model, and this especially applies to multiplayer games that also have single player modes. If a game supports single player, the client side of course needs to have everything (and the piracy/plagiarism issues will be back), it's just that most of it won't be used in multiplayer if this architecture's used. For multiplayer, the hosting server can choose (before hosting the game) whether this architecture's used (of course, only players with the full client side package can join servers using the traditional counterpart, and only players with the server side subscription can join servers using this architecture). Alternatively, players can choose to play single player modes with a server for each player, and those servers are provided by the game company, letting players play otherwise extremely demanding games on a low-end machine (of course, the players will need the periodic subscription to have access to this kind of single player mode). On the business side, it means such games will have a client side package, with a one time price for everything on the client side, and a server side package, with a periodic subscription for being able to play multiplayer, and single player with a dedicated server provided; players can then buy either one, or both, depending on their needs and wants. This hybrid model, if both technically and economically feasible, is perhaps the best model I can think of.
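A rough sketch of the hosting decision in this hybrid model; the flags and helper names (useServerSideRendering, hasClientPackage, hasServerSubscription) are invented purely to illustrate who can join which kind of server:

// A sketch of the hybrid model's join rule described above. All flags and
// names are hypothetical; they only illustrate the business rule.
function canJoin(server, player) {
    if (server.useServerSideRendering) {
        // Servers using this architecture only need the subscription,
        // since no game contents ever reach the client
        return player.hasServerSubscription;
    }
    // Traditional servers need the full client side package installed
    return player.hasClientPackage;
}
var server = { useServerSideRendering: true }; // Chosen before hosting the game
console.log(canJoin(server, { hasServerSubscription: true, hasClientPackage: false })); // true
console.log(canJoin(server, { hasServerSubscription: false, hasClientPackage: true })); // false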

DoubleX


How Information Density And Volume Affect Codebase Readability

Abbreviations
HID - High Information Density
LID - Low Information Density
HIV - High Information Volume
LIV - Low Information Volume
HID/HIV - Those who can handle both HID and HIV well
HID/LIV - Those who can handle HID well but can only handle LIV well
LID/HIV - Those who can only handle LID well but can handle HIV well
LID/LIV - Those who can only handle LID and LIV well

TL;DR (The Whole Article Takes About 30 Minutes To Read In Full Depth)

Information Density
A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. Unfortunately, different programmers have different capacities for handling information density. In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes is just wasting their time in their perspective; those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while terse codes are just too complicated and convoluted in their perspective. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiency, so all we can do is try hard.

Information Volume
A code chunk having a large piece of information content that isn't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that isn't abstracted away from that code chunk has LIV. Unfortunately, different software engineers have different capacities for handling information volume, so it seems that the best way's to find a happy medium that can break a very long function into fathomable chunks on one hand, while still keeping the function call stack manageable on the other. In general, those who can handle very HIV well will prefer very long functions, as it'll be more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way for such software engineers, while writing and reading very short functions is just going in the opposite direction in their perspective; those who can only handle very LIV well will prefer very short functions, as it'll be easier and simpler to reason about well-defined abstractions (as long as they don't leak in nontrivial ways) that way for such software engineers, while long functions are just going in the opposite direction in their perspective. Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with codes having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiency (especially when those small function abstractions do leak in nontrivial ways), so all we can do is try hard.
Combining Information Density With Information Volume
While information density and volume are closely related, there's no strict implication from one to the other, meaning that there are different combinations of these 2 factors and the resultant styles can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions:
Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy
Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First
Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate
Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep

Teams With Programmers Having Different Styles
It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are becoming more and more extreme. The combinations of these conflicts may be:
Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes
Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things
Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose
Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient
Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles

Conclusions
Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there's quite some middle ground to play with. In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle grounds well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some software engineers falling into at least 1 extreme, so we should still know how to work well with them. After all, nowadays most real life business codebases are about teamwork, not lone wolves. By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think about some aspects behind codebase readability and the nature of conflicts about it, and that we can become more able to deal with more different kinds of codebases and software engineers. I think it's more feasible for us to learn to read codebases with different information density and volume than to ask others and the codebase to accommodate our information density/volume limitations. Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large (e.g.: the existence of consistent formatting and meaningful naming) and partially subjective at large (e.g.: the ability to handle different kinds of information density and volume for different software engineers). Maybe many avoidable conflicts involving readability stem from the tendency of many software engineers to treat readability as an easy, simple and small concept that's entirely objective.
Information Density

A Math Analogy
Consider the following math formula that's likely learnt in high school (Euler's Formula):
e^(ix) = cos x + i sin x
Most of those who've studied high school math well should immediately fathom this, but for those who don't, you may want to try to fathom this text equivalent, which is more verbose: raising the Euler number e to the power of the imaginary unit i times a real number x gives the cosine of that number x plus the imaginary unit i times the sine of that number x. I hope that those who can't fathom the above formula can at least fathom the above text :)
This brings up the importance of information density: A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. For instance, the above formula has HID whereas the above text has LID. In this example, those who're good at math in general and high school math in particular will likely prefer the formula over the text equivalent, as they can probably fathom the former instantly while feeling that the latter's just wasting their time; those who're bad at math in general and high school math in particular will likely prefer the text equivalent over the formula, as they might not even know that cis x is the short form of cos x + i sin x. For those who can handle HID well, even if they don't know what the Euler number is at all, they should still be able to deduce some nontrivial corollaries within minutes if they know what cis x is; but for those who can only handle LID well, they'll unlikely be able to know what's going on at all, even if they know how to use the binomial theorem and the truncation operator.
Now let's try to fathom another math formula that can be fathomed using just high school math (the formula image from the original post isn't reproduced here). While it doesn't involve as much math knowledge nor concepts as Euler's Formula, I'd guess that only those who're really, really exceptional in high school math and math in general can fathom it within seconds, let alone instantly, all because of this formula having such a ridiculously HID. If you can really fathom it instantly, then I'd think that you can really handle very HID very well, especially when it comes to math :D So what if we try to explain it by text? Maybe with the text version you can finally fathom what the formula is, but still probably not what it really means nor how to use it meaningfully, let alone deduce any useful corollary. However, with the text version, at least we can clearly see just how high the information density is in that formula, as even the information density of the text version isn't anything low.
These 2 math examples aim to show that HID, as long as it's kept in moderation, is generally preferred over the LID counterparts. But once the information density becomes too unnecessarily and unreasonably high, the much more verbose version, despite seeming to be too verbose, is actually preferred in general, especially when its information density isn't low.

Some Examples Showing HID vs LID
There are programming parallels to the above math analogy: terse and verbose codes. Unfortunately, different programmers have different capacities for handling information density, just like different people have different capacities for fathoming math. For instance, the ternary operator is a very obvious terse example of this (Javascript ES5):
var x = condition1 ? value1 : condition2 ? value2 : value3;
Whereas a verbose if/else if/else equivalent can be something like this:
var x;
if (condition1 === true) {
    x = value1;
} else if (condition2 === true) {
    x = value2;
} else {
    x = value3;
}
Those who're used to reading and writing terse codes will likely like the ternary operator version, as the if/else if/else version will likely be just too verbose for them; those who're used to reading and writing verbose codes will likely like the if/else if/else version, as the ternary operator version will likely be just too terse for them (I've seen production codes with if (variable === true), so don't think that the if/else if/else version can only be a totally made up example). In this case, I've worked with both styles, and I guess that most programmers can handle both.
Similarly, Javascript and some other languages support short circuit evaluation, which is also a terse style. For instance, the || and && operators can be short circuited this way:
return isValid && (array || []).concat(object || canUseDefault && defaultVal);
Where a verbose equivalent can be something like this (it's probably too verbose anyway):
var returnedValue;
if (isValid === true) {
    var returnedArray;
    var isValidArray = (array !== null) && (array !== undefined);
    if (isValidArray === true) {
        returnedArray = array;
    } else {
        returnedArray = [];
    }
    var pushedObject;
    var isValidObject = (object !== null) && (object !== undefined);
    if (isValidObject === true) {
        pushedObject = object;
    } else if (canUseDefault === true) {
        pushedObject = defaultVal;
    } else {
        pushedObject = canUseDefault;
    }
    if (Array.isArray(pushedObject) === true) {
        returnedArray = returnedArray.concat(pushedObject);
    } else {
        returnedArray = returnedArray.concat([pushedObject]);
    }
    returnedValue = returnedArray;
} else {
    returnedValue = isValid;
}
return returnedValue;
Clearly the terse version has very HID while the verbose version has very LID. Those who can handle HID well will likely fathom the terse version instantly while needing minutes just to fathom what the verbose version's really trying to achieve, and why it's not written in the terse version to avoid wasting time reading so much code doing so little meaningful work; those who can only handle LID well will likely fathom the verbose version within minutes while probably giving up after trying to fathom the terse version for seconds, wondering what's the point of being concise when it's doing so many things in just 1 line. In this case, I seriously suspect whether anyone fathoming Javascript will ever write the verbose version at all, when the terse version is actually one of the popular idiomatic styles.
Now let's try to fathom this really, really terse code (I hope you won't face this in real life):
for (var texts = [], num = min; num <= max; num += increment) {
    var primeMods = primes.map(function(prime) { return num % prime; });
    texts.push(primeMods.reduce(function(text, mod, i) {
        return (text + (mod || words[i])).replace(mod, "");
    }, "") || num);
}
return texts.join(textSeparator);
If you can fathom this within seconds or even instantly, then I'd admit that you can really handle ridiculously HID exceptionally well. However, adding these lines will make it clear:
var min = 1, max = 100, increment = 1;
var primes = [3, 5], words = ["Fizz", "Buzz"], textSeparator = "\n";
So all it's trying to do is the very, very popular Fizz Buzz programming test in a ridiculously terse way.
So let's try this much more verbose version of this Fizz Buzz programming test:
var texts = [];
for (var num = min; num <= max; num = num + increment) {
    var text = "";
    var primeCount = primes.length;
    for (var i = 0; i < primeCount; i = i + 1) {
        var prime = primes[i];
        var mod = num % prime;
        if (mod === 0) {
            var word = words[i];
            text = text + word;
        }
    }
    if (text === "") {
        texts.push(num);
    } else {
        texts.push(text);
    }
}
return texts.join(textSeparator);
Even those who can handle very HID well should still be able to fathom this verbose version within seconds, and so should those who can only handle very LID well. Also, considering the inherent complexity of this generalized Fizz Buzz, the verbose version doesn't have much boilerplate, even when compared to the terse version, so I don't think those who can handle very HID well will complain about the verbose version much. On the other hand, I doubt whether those who can only handle very LID well could even fathom the terse version, let alone in a reasonable amount of time (like minutes), if I hadn't told you that it's just Fizz Buzz. In this case, I really doubt what's the point of writing the terse version when I don't see any nontrivial issue in the verbose version (while the terse version's likely harder to fathom).

Back To The Math Analogy
Imagine that a mathematician and math professor who's used to teaching postdoc math now has to teach high school math to elementary math students (I've heard that a very small number of parents are so ridiculous as to want their elementary children to learn high school math even when those children aren't interested in nor good at math). That's almost mission impossible, but all that teacher can do is first consolidate the elementary math foundation of those students while fostering their interest in math, then gradually progress to middle school math, and finally high school math once those students are good at middle school math. All those students can do is work extremely hard to catch up with such great hurdles. Unfortunately, it seems to me that it'd take far too many resources, especially time, when those who can handle very HID well try to teach those who can only handle very LID well to handle HID. Even when those who can only handle very LID well can eventually be nurtured to meet the needs imposed by the codebase, it's still unlikely to be worth it, especially for software teams with very tight budgets, no matter how well intentioned it is. So should those who can only handle very LID well train themselves up to be able to handle HID? I hope so, but I doubt it, as that's similar to asking a high school student to fathom postdoc math. While it's possible, I still guess that most of us will think that it's too costly and disproportional just to apply actually basic math formulae that are merely written in terse styles. Should those who can handle very HID well learn how to deal with LID well as well? I hope so, but I doubt it, as that's similar to asking mathematicians to abandon their mother tongue when it comes to math (using words instead of symbols to express math). While it's possible, I still guess that most of us will think that it's excessively ineffective and inefficient just to communicate with those who're very poor at math when discussing advanced math. So it seems that maybe those who can handle HID well and those who can only handle LID well should avoid working with each other as much as possible.
But that'd mean all these:
- The current software team must identify whether the majority can handle HID well or can only handle LID well, which isn't easy to do and is most often totally ignored
- The software engineering job requirement must state whether being able to deal with HID well will be prioritized or even required, which is an uncommon statement
- All applicants must know whether they can handle HID well, which is usually overlooked
- The candidate screening process must be able to tell who can handle HID well
- Most importantly, the team must be able to hire enough candidates who can handle HID well, and it's obvious that many software teams just won't be able to do that
Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HID well should try their best to only touch the HID part of the codebase, while those who can only handle very LID well should try their best to only touch the LID part of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't really be that modular.

A Considerable Solution
With an IDE supporting collapsing comments, one can try something like this:
/* var returnedValue;
if (isValid === true) {
    var returnedArray;
    var isValidArray = (array !== null) && (array !== undefined);
    if (isValidArray === true) {
        returnedArray = array;
    } else {
        returnedArray = [];
    }
    var pushedObject;
    var isValidObject = (object !== null) && (object !== undefined);
    if (isValidObject === true) {
        pushedObject = object;
    } else if (canUseDefault === true) {
        pushedObject = defaultVal;
    } else {
        pushedObject = canUseDefault;
    }
    if (Array.isArray(pushedObject) === true) {
        returnedArray = returnedArray.concat(pushedObject);
    } else {
        returnedArray = returnedArray.concat([pushedObject]);
    }
    returnedValue = returnedArray;
} else {
    returnedValue = isValid;
}
return returnedValue; */
return isValid && (array || []).concat(object || canUseDefault && defaultVal);
Of course it's not practical when the majority of the codebase's so terse that those who can only handle very LID well will struggle most of the time, but those who can handle very HID well can try to do the former this favor when there aren't lots of terse codes for them to write. The point of this comment's to be a working compromise between the need to read codes effectively and efficiently for those who can handle very HID well, and the need to fathom codes easily and simply for those who can only handle very LID well.

Summary
In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes is just wasting their time in their perspective; those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while terse codes are just too complicated and convoluted in their perspective. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiency, so all we can do is try hard.
Information Volume

An Eating Analogy
Let's say we're ridiculously big eaters who can eat 1kg of meat per meal. But can we eat all that 1kg of meat in just 1 chunk? Probably not, as our mouths just won't be big enough, so we'll have to cut it into digestible chunks. However, can we eat it if it becomes 1kg of very fine-grained meat powder? Maybe, but that's likely daunting or even dangerous (extremely high risk of severe choking) for most of us. So it seems that the best way's to find a happy medium that works for us, like cutting it into chunks that are just small enough for our mouths to handle. There might still be many chunks, but at least they'll be manageable enough.
The same can largely be applied to fathoming codes, even though there are still differences. Let's say you're reading a well-documented function with 100k lines, and none of its business logic is duplicated in the entire codebase (so breaking this function up won't help code reuse right now). Unless we're so good at fathoming big functions that we can keep all these 100k lines of implementation details in our head as a whole, reading such a function will likely be daunting or even dangerous (extremely high risk of fathoming it all wrong) for most of us, assuming that we can indeed fathom it within a feasible amount of time (like within hours). On the other hand, if we break that 100k line function into extremely small functions so that the function call stack can be as deep as 100 calls, we'll probably be in really big trouble when we've to debug those functions having bugs that don't have apparently obvious causes and aren't caught by the current test suite (no test suite can catch all bugs after all). After all, traversing such a deep call stack without getting lost and having to start all over again is like eating tons of very fine-grained meat powder without ever choking severely. Even if we can eventually fix all those bugs with the test suite updated, it'll still be unlikely to be done within a reasonable amount of time (talking about days or even weeks when the time budget is tight).
This brings up the importance of information volume: A code chunk having a large piece of information content that isn't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that isn't abstracted away from that code chunk has LIV. For instance, the above 100k line function has HIV whereas the above small functions with a deep call stack have LIV. So it seems that the best way's to find a happy medium that can break that 100k line function into fathomable chunks on one hand, while still keeping the call stack manageable on the other. For instance, if possible, breaking that 100k line function into ones where the largest are 1k line functions and the deepest call stack is 10 calls can be a good enough balance. While fathoming a 1k line function is still hard for most of us, it's at least practical; while debugging functions having call stacks with 10 calls is still time-consuming for most of us, it's at least realistic to be done within a tight budget.

A Small Example Showing HIV vs LIV
Unfortunately, different software engineers have different capacities for handling information volume, just like different people have different mouth sizes.
Consider the following small example(Some of my Javascript ES5 codes with comments removed): LIV Version(17 methods with the largest being 4 lines and the deepest call stack being 11) - $.result = function(note, argObj_) {     if (!$gameSystem.satbParam("_isCached")) {         return this._uncachedResult(note, argObj_, "WithoutCache");     }     return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) {     var result = this._uncachedResult(note, argObj_, "WithCache");     this._cache.updateResult(note, argObj_, result);     return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) {     if (this._rules.isAssociative(note)) {         return this._associativeResult(note, argObj_, funcNameSuffix);     }     return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) {     var partResults = this._partResults(note, argObj_, funcNameSuffix);     var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) {     var priorities = this._rules.priorities(note);     var funcName = "_partResult" + funcNameSuffix + "_";     var resultFunc = this[funcName].bind(this, note, argObj_);     return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) {     return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) {     var cache = this._cache.partResult_(note, argObj_, part);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) {     var result =             this._uncachedPartResult_(note, argObj_, part, "WithCache");     this._cache.updatePartResult(note, argObj_, part, result);     return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) {     var list = this["_pairFuncListPart" + funcNameSuffix](note, part);     if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) {     var list = this["_pairFuncList" + funcNameSuffix](note);     var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) {     return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) {     var cache = this._cache.pairFuncList_(note);     return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) {     var list = this._uncachedPairFuncList(note, "WithCache");     this._cache.updatePairFuncList(note, list);     return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) {     var funcName = "_pairFuncListPart" + funcNameSuffix;     return this._rules.priorities(note).reduce(function(list, part) {         return list.concat(this[funcName](note, part));     }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) {     var cache = 
this._cache.pairFuncListPart_(note, part);     return cache || this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) {     var list = this._pairFuncListPartWithoutCache(note, part);     this._cache.updatePairFuncListPart(note, part, list);     return list; }; $._pairFuncListPartWithoutCache = function(note, part) {     var func = this._pairs.pairFuncs.bind(this._pairs, note);     return this._cache.partListData(part, this._battler).map(func); }; HIV Version(10 methods with the largest being 12 lines and the deepest call stack being 5) - $.result = function(note, argObj_) {     if (!$gameSystem.satbParam("_isCached")) {         return this._uncachedResult(note, argObj_, "WithoutCache");     }     var cache = this._cache.result_(note, argObj_);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     // $._updatedResultWithCache START     var result = this._uncachedResult(note, argObj_, "WithCache");     this._cache.updateResult(note, argObj_, result);     return result;     // $._updatedResultWithCache END }; $._uncachedResult = function(note, argObj_, funcNameSuffix) {     if (this._rules.isAssociative(note)) {         // $._associativeResult START             // $._partResults START         var priorities = this._rules.priorities(note);         var funcName = "_partResult" + funcNameSuffix + "_";         var resultFunc = this[funcName].bind(this, note, argObj_);         var partResults =                  priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT);             // $._partResults END         var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult);         // $._associativeResult START     }     // $._nonAssociativeResult START     var list = this["_pairFuncList" + funcNameSuffix](note);     var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult);     // $._nonAssociativeResult END }; $._partResultWithoutCache_ = function(note, argObj_, part) {     return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) {     var cache = this._cache.partResult_(note, argObj_, part);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     // $._updatedPartResultWithCache_ START     var result =             this._uncachedPartResult_(note, argObj_, part, "WithCache");     this._cache.updatePartResult(note, argObj_, part, result);     return result;     // $._updatedPartResultWithCache_ END }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) {     var list = this["_pairFuncListPart" + funcNameSuffix](note, part);     if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._pairFuncListWithoutCache = function(note) {     return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) {     var cache = this._cache.pairFuncList_(note);     if (cache) return cache;     // $._updatedPairFuncListWithCache START     var list = this._uncachedPairFuncList(note, "WithCache");     this._cache.updatePairFuncList(note, list);     return list;     // $._updatedPairFuncListWithCache END }; $._uncachedPairFuncList = function(note, funcNameSuffix) {     var funcName = "_pairFuncListPart" + funcNameSuffix;     return this._rules.priorities(note).reduce(function(list, part) {         return list.concat(this[funcName](note, part));     
}.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) {     var cache = this._cache.pairFuncListPart_(note, part);     if (cache) return cache;     // $._updatedPairFuncListPartWithCache START     var list = this._pairFuncListPartWithoutCache(note, part);     this._cache.updatePairFuncListPart(note, part, list);     return list;     // $._updatedPairFuncListPartWithCache END }; $._pairFuncListPartWithoutCache = function(note, part) {     var func = this._pairs.pairFuncs.bind(this._pairs, note);     return this._cache.partListData(part, this._battler).map(func); };
In case you can't fathom what this example's about, the original post included a simple flow chart of it (the chart didn't mention that the actual codes also handle whether the cache will be used). Even though the underlying business logic's easy to fathom, different people will likely react to the HIV and LIV versions differently. Those who can handle very HIV well will likely find the LIV version less readable due to having to unnecessarily traverse all these excessively small methods (the smallest ones being 1 liners) and enduring the deepest call stack of 11 calls (from $.result to $._pairFuncListPartWithoutCache); those who can only handle very LIV well will likely find the HIV version less readable due to having to unnecessarily fathom all these excessively mixed implementation details as a single unit in one go in the biggest method with 12 lines, and enduring the presence of 3 different levels of abstraction combined in just the biggest and most complex method ($._uncachedResult). Bear in mind that this is just a small example which is easy to fathom and simple to explain, so the differences between the HIV and LIV styles, and the potential conflicts between those who can handle very HIV well and those who can only handle very LIV well, will only be even larger and harder to resolve when it comes to massive real life production codebases.

Back To The Eating Analogy
Imagine that the size of the mouths of various people can vary so much that the largest digestible chunk for those with the smallest mouths is as small as a very fine-grained powder in the eyes of those with the largest mouths. Let's say that these 2 extremes are going to eat together, sharing the same meal set. How should these meals be prepared? An obvious way's to give them different tools to break these meals into digestible chunks of sizes suiting their needs, so they'll respectively use the tools that are appropriate for them, meaning that the meal provider won't try to do these jobs themselves at all. It's possible that those with the smallest mouths will happily break those meals into very fine-grained powders, while those with the largest mouths will just eat each individual food as a whole without much trouble. Unfortunately, it seems to me that there's still no well battle-tested automatic tool that can effectively and efficiently break a large code chunk into well-defined smaller digestible code chunks with configurable size and complexity without nontrivial side effects, so those who can only handle very LIV well will have to do it manually when having to fathom large functions. Also, even if there were such a tool, such automatic work would still effectively be refactoring that function, thus probably irritating colleagues who can handle very HIV well. So should those who can only handle very LIV well train themselves up to be able to deal with HIV? I hope so, but I doubt it, as that's similar to asking those with very small mouths to increase their mouth size.
While it's possible, I still guess that most of us will think that it's too costly and disproportional just to eat foods in chunks that are too large for them. Should those who can handle very HIV well learn how to deal with LIV well as well? I hope so, but I doubt it, as that's similar to asking those with very large mouths to force themselves to eat very fine-grained meat powders without ever choking severely (getting lost when traversing a very deep call stack). While it's possible, I still guess that most of us will think that it's too risky and unreasonable just to eat foods as very fine-grained powders unless they really have no other choice at all (meaning that they should actually avoid this as much as possible). So it seems that maybe those who can handle HIV well and those who can only handle LIV well should avoid working with each other as much as possible. But that'd mean all these:
- The current software team must identify whether the majority can handle HIV well or can only handle LIV well, which isn't easy to do and is most often totally ignored
- The software engineering job requirement must state whether being able to deal with HIV well will be prioritized or even required, which is an uncommon statement
- All applicants must know whether they can handle HIV well, which is usually overlooked
- The candidate screening process must be able to tell who can handle HIV well
- Most importantly, the team must be able to hire enough candidates who can handle HIV well, and it's obvious that many software teams just won't be able to do that
Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HIV well should try their best to only touch the HIV part of the codebase, while those who can only handle very LIV well should try their best to only touch the LIV part of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't really be that modular.
An Imagined Solution Let's say there's an IDE which can display the function calls in the inlined form, like from: $.result = function(note, argObj_) {     if (!$gameSystem.satbParam("_isCached")) {         return this._uncachedResult(note, argObj_, "WithoutCache");     }     return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) {     var result = this._uncachedResult(note, argObj_, "WithCache");     this._cache.updateResult(note, argObj_, result);     return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) {     if (this._rules.isAssociative(note)) {         return this._associativeResult(note, argObj_, funcNameSuffix);     }     return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) {     var partResults = this._partResults(note, argObj_, funcNameSuffix);     var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) {     var priorities = this._rules.priorities(note);     var funcName = "_partResult" + funcNameSuffix + "_";     var resultFunc = this[funcName].bind(this, note, argObj_);     return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) {     return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) {     var cache = this._cache.partResult_(note, argObj_, part);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) {     var result =             this._uncachedPartResult_(note, argObj_, part, "WithCache");     this._cache.updatePartResult(note, argObj_, part, result);     return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) {     var list = this["_pairFuncListPart" + funcNameSuffix](note, part);     if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) {     var list = this["_pairFuncList" + funcNameSuffix](note);     var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) {     return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) {     var cache = this._cache.pairFuncList_(note);     return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) {     var list = this._uncachedPairFuncList(note, "WithCache");     this._cache.updatePairFuncList(note, list);     return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) {     var funcName = "_pairFuncListPart" + funcNameSuffix;     return this._rules.priorities(note).reduce(function(list, part) {         return list.concat(this[funcName](note, part));     }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) {     var cache = this._cache.pairFuncListPart_(note, part);     return cache || 
this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) {     var list = this._pairFuncListPartWithoutCache(note, part);     this._cache.updatePairFuncListPart(note, part, list);     return list; }; $._pairFuncListPartWithoutCache = function(note, part) {     var func = this._pairs.pairFuncs.bind(this._pairs, note);     return this._cache.partListData(part, this._battler).map(func); }; To be displayed as something like this: $.result = function(note, argObj_) {     if (!$gameSystem.satbParam("_isCached")) {         // $._uncachedResult START         if (this._rules.isAssociative(note)) {             // $._associativeResult START                 // $._partResults START             var priorities = this._rules.priorities(note);             var partResults = priorities.map(function(part) {                     // $._partResultWithoutCache START                         // $._uncachedPartResult_ START                             // $._pairFuncListPartWithoutCache START                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 var list = this._cache.partListData( part, this._battler).map(func);                             // $._pairFuncListPartWithoutCache END                 if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_);                         // $._uncachedPartResult_ END                     // $._partResultWithoutCache END             }).filter(_SATB.IS_VALID_RESULT);                 // $._partResults END             var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult);             // $._associativeResult START         }             // $._nonAssociativeResult START                 // $._pairFuncListWithoutCache START                     // $._uncachedPairFuncList START var priorities = this._rules.priorities(note);         var list = priorities.reduce(function(list, part) {                         // $._pairFuncListPartWithoutCache START             var func = this._pairs.pairFuncs.bind(this._pairs, note);             var l = this._cache.partListData( part, this._battler).map(func);                         // $._pairFuncListPartWithoutCache END             return list.concat(l);         }.bind(this), []);                     // $._uncachedPairFuncList END                 // $._pairFuncListWithoutCache END         var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( list, note, argObj_, defaultResult);             // $._nonAssociativeResult END         // $._uncachedResult END     }     var cache = this._cache.result_(note, argObj_);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     // $._updatedResultWithCache START         // $._uncachedResult START     var result;     if (this._rules.isAssociative(note)) {             // $._associativeResult START                 // $._partResults START         var priorities = this._rules.priorities(note);         var partResults = priorities.map(function(part) {                     // $._partResultWithCache START             var cache = this._cache.partResult_(note, argObj_, part);             if (_SATB.IS_VALID_RESULT(cache)) return cache;                         // $._updatedPartResultWithCache_ START                             // $._uncachedPartResult_ START                                 // $._pairFuncListPartWithCache START             var c = this._cache.pairFuncListPart_(note, part);        
     var list;             if (c) {                 list = c;             } else {                                     // $._updatedPairFuncListPartWithCache START                                         // $._uncachedPairFuncListPart START                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 list = this._cache.partListData( part, this._battler).map(func);                                         // $._uncachedPairFuncListPart END                 this._cache.updatePairFuncListPart(note, part, list);                                     // $._updatedPairFuncListPartWithCache END             }                                 // $._pairFuncListPartWithCache END             var result = undefined;             if (list.length > 0) { result = this._rules.chainedResult(list, note, argObj_);             }                             // $._uncachedPartResult_ END             this._cache.updatePartResult(note, argObj_, part, result);             return result;                         // $._updatedPartResultWithCache_ END                     // $._partResultWithCache END         }).filter(_SATB.IS_VALID_RESULT);                 // $._partResults END         var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult( partResults, note, argObj_, defaultResult);             // $._associativeResult START     }             // $._nonAssociativeResult START                 // $._pairFuncListWithCache START     var cache = this._cache.pairFuncList_(note), list;     if (cache) {         list = cache;     } else {                     // $._updatedPairFuncListWithCache START                         // $._uncachedPairFuncList START var priorities = this._rules.priorities(note);         var list = priorities.reduce(function(list, part) {                             // $._pairFuncListPartWithCache START             var cache = this._cache.pairFuncListPart_(note, part);             var l;             if (cache) {                 l = cache;             } else {                                 // $._updatedPairFuncListPartWithCache START                                     // $._uncachedPairFuncListPart START                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 l = this._cache.partListData( part, this._battler).map(func);                                     // $._uncachedPairFuncListPart END                 this._cache.updatePairFuncListPart(note, part, l);                                 // $._updatedPairFuncListPartWithCache END             }             return list.concat(l);                 // $._pairFuncListPartWithCache END         }.bind(this), []);                         // $._uncachedPairFuncList END         this._cache.updatePairFuncList(note, list);                     // $._updatedPairFuncListWithCache END     }                 // $._pairFuncListWithCache END     var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult(list, note, argObj_, defaultResult);             // $._nonAssociativeResult END         // $._uncachedResult END     this._cache.updateResult(note, argObj_, result);     return result;     // $._updatedResultWithCache END }; Or this one without comments indicating the starts and ends of the inlined functions: $.result = function(note, argObj_) {     if (!$gameSystem.satbParam("_isCached")) {         if (this._rules.isAssociative(note)) {             var priorities = this._rules.priorities(note);             var partResults = priorities.map(function(part) 
{                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 var list = this._cache.partListData( part, this._battler).map(func);                 if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_);             }).filter(_SATB.IS_VALID_RESULT);             var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult);         } var priorities = this._rules.priorities(note);         var list = priorities.reduce(function(list, part) {             var func = this._pairs.pairFuncs.bind(this._pairs, note);             var l = this._cache.partListData( part, this._battler).map(func);             return list.concat(l);         }.bind(this), []);         var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( list, note, argObj_, defaultResult);     }     var cache = this._cache.result_(note, argObj_);     if (_SATB.IS_VALID_RESULT(cache)) return cache;     var result;     if (this._rules.isAssociative(note)) {         var priorities = this._rules.priorities(note);         var partResults = priorities.map(function(part) {             var cache = this._cache.partResult_(note, argObj_, part);             if (_SATB.IS_VALID_RESULT(cache)) return cache;             var c = this._cache.pairFuncListPart_(note, part);             var list;             if (c) {                 list = c;             } else {                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 list = this._cache.partListData( part, this._battler).map(func);                 this._cache.updatePairFuncListPart(note, part, list);             }             var result = undefined;             if (list.length > 0) { result = this._rules.chainedResult(list, note, argObj_);             }             this._cache.updatePartResult(note, argObj_, part, result);             return result;         }).filter(_SATB.IS_VALID_RESULT);         var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult( partResults, note, argObj_, defaultResult);     }     var cache = this._cache.pairFuncList_(note), list;     if (cache) {         list = cache;     } else { var priorities = this._rules.priorities(note);         var list = priorities.reduce(function(list, part) {             var cache = this._cache.pairFuncListPart_(note, part);             var l;             if (cache) {                 l = cache;             } else {                 var func = this._pairs.pairFuncs.bind(this._pairs, note);                 l = this._cache.partListData( part, this._battler).map(func);                 this._cache.updatePairFuncListPart(note, part, l);             }             return list.concat(l);         }.bind(this), []);         this._cache.updatePairFuncList(note, list);     }     var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult(list, note, argObj_, defaultResult);     this._cache.updateResult(note, argObj_, result);     return result; }; With just 1 click on $.result. Bear in mind that the actual codebase hasn't changed one bit, it's just that the IDE will display the codes from the original LIV form to the new HIV form. The goal this feature's to keep the codebase in the LIV form, while still letting those who can handle HIV well to be able to read the codebase displayed in the HIV version. 
It's very unlikely for those who can only handle very LIV well to be able to fathom such a complicated and convoluted method with 73 lines and so many different levels of varying abstractions and implementation details all mixed up together, not to mention the really vast amount of completely needless code duplication that isn't even easy nor simple to spot fast; those who can handle very HIV well, however, may feel that a 73 line method is so small that they can hold everything inside it in their head as a whole very quickly without a hassle. Of course, one doesn't have to show everything at once, so besides the aforementioned feature that inlines everything in the reading mode with just 1 click, the IDE should also support inlining one function at a time. Let's say we're to reveal _uncachedPairFuncListPart:
$._updatedPairFuncListPartWithCache = function(note, part) {
    var list = this._uncachedPairFuncListPart(note, part);
    this._cache.updatePairFuncListPart(note, part, list);
    return list;
};
Clicking that method name in the above method should lead to something like this:
$._updatedPairFuncListPartWithCache = function(note, part) {
    // $._uncachedPairFuncListPart START
    var func = this._pairs.pairFuncs.bind(this._pairs, note);
    var list = this._cache.partListData(part, this._battler).map(func);
    // $._uncachedPairFuncListPart END
    this._cache.updatePairFuncListPart(note, part, list);
    return list;
};
Similarly, clicking the method name updatePairFuncListPart should reveal the implementation details of that method of this._cache, provided that the IDE can access the code of that class. Such an IDE, if even possible in the foreseeable future, should at least reduce the severity of traversing a deep call stack with tons of small functions for those who can handle very HIV well, if not remove the problem entirely, without forcing those who can only handle very LIV well to deal with HIV, and without the issue of fighting over refactoring in this regard.

Summary
In general, those who can handle very HIV well will prefer very long functions, as it'll be more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way for such software engineers, while writing and reading very short functions is just going in the opposite direction in their perspective; those who can only handle very LIV well will prefer very short functions, as it'll be easier and simpler to reason about well-defined abstractions (as long as they don't leak in nontrivial ways) that way for such software engineers, while long functions are just going in the opposite direction in their perspective. Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with codes having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiency (especially when those small function abstractions do leak in nontrivial ways), so all we can do is try hard.

Combining Information Density With Information Volume

Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy
Imagine that you're reading a well-documented 100k line function where almost every line's written like some of the most complex math formulae. I'd guess that even the best of the best software engineers will never ever want to touch this perverted beast again in their lives.
Usually such codebases are considered dead and will thus probably be rewritten from scratch. Of course, HID + HIV isn't always this extreme, as the aforementioned 73 line version of $.result also falls into this category. Even though it'd still be a hellish nightmare for most software engineers to work with if many functions in the codebase are written this way, it's still feasible to refactor them into very high quality code within a reasonably tight budget if we've the highest devotion, diligence and discipline possible. While such an iron fist approach should only be the last resort, sometimes it's called for, so we should be ready.

Nevertheless, try to avoid HID + HIV as much as possible, unless the situation really, really calls for it, like optimizing a massive production codebase to death(e.g.: gameplay codes), or when the problem domain's so chaotic and unstable that no sane or sensible architecture will survive for even just a short time(pathetic architectures can be way worse than none). If you still want to use this style even when it's clearly unnecessary, you should have the most solid reasons and evidence possible to prove that it's indeed doing more good than harm.

Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First

For instance, the below code falls into this category:

return isValid && (array || []).concat(object || canUseDefault && defaultVal);

Imagine that you're reading a codebase having mostly well-defined and well-documented small functions(but far from being mostly 1 liners), but most of those small functions are written like some of the most complex math formulae. While fathoming such codes at first will be very difficult, because the functions are well-documented, those functions will be easy to edit once you've fathomed them with the help of those comments; because the functions are small enough and well-defined, those functions will be easy to use once you've fathomed how they're being called, with the help of those callers which are themselves high quality codes.

Of course, HID + LIV doesn't always mean small short term pains with large long term pleasures, as it's impossible to ensure that none of those abstractions will ever leak in nontrivial ways. While the codebase will be easy to work with when it only ever has bugs that are either caught by the test suite or have at least some obvious causes, such a codebase can still be daunting to work with once it produces rare bugs that are hard to even reproduce, all because it's very hard to form the full picture, with every last bit of nontrivial relevant detail, of a massive codebase having mostly small but very terse functions. Nevertheless, as long as all things are kept in moderation(one can always try in this regard), HID + LIV is generally advantageous as long as the codebase's large enough to warrant large scale software architectures and designs(the lifespan of the codebase should also be long enough), but not so large that no one can form the full picture anymore, as the long term pleasures will likely be large and long enough to outweigh the short term pains a lot here.

Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate

Think of an extremely verbose codebase full of boilerplate and exceptionally long functions. Maybe those functions are long because of the verbosity, but you usually can't tell before actually reading them all.
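For a fabricated flavor sample of that shape(reusing the _SATB.IS_VALID_RESULT predicate from above purely for illustration; this isn't the earlier 28 line example):

// The verbose LID + HIV shape: many lines, little actually happening
function collectValidResults(results) {
    var validResults = [];
    var index = 0;
    while (index < results.length) {
        var currentResult = results[index];
        var isCurrentResultValid = _SATB.IS_VALID_RESULT(currentResult);
        if (isCurrentResultValid === true) {
            validResults.push(currentResult);
        }
        index = index + 1;
    }
    return validResults;
}
// The terse counterpart doing exactly the same job in 1 line
function collectValidResultsTersely(results) { return results.filter(_SATB.IS_VALID_RESULT); }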
Anyway, you'll probably feel that the codebase's just wasting lots of your time once you realize that most of those long functions aren't actually doing much. Think of the aforementioned 28 line verbose Javascript example having an extremely easy, simple and small terse 1 line counterpart, and think of the former being ubiquitous in the codebase. I guess that even the most verbose software engineers will want to refactor it all, as working with it'd just be way too ineffective and inefficient otherwise.

Of course, LID + HIV isn't always that bad, especially when things are kept in moderation. At least, it'd be easy for most newcomers to fathom the codebase, so codebases written in this style can actually be very beginner-friendly, which is especially important for software teams having very high turnover rates. Even though it's unlikely that anyone will be able to work with such a codebase effectively or efficiently no matter how much they've fathomed it, due to the heavy verbosity and loads of boilerplate, the problem will be less severe if it's short-lived. Also, writing codes in this style can be extremely fast at first, even though it'll gradually become slower and slower, so this style's very useful in at least prototyping/making PoCs.

Nevertheless, LID + HIV shouldn't be used on codebases that'd already be very large without the extra verbosity or boilerplate, especially when they're going to have a very long life span. Just think of a codebase that could be kept to the 100k scale with very terse codes(but still readable), but that reaches the 10M scale because all those terse codes are refactored into tons of verbose codes with boilerplate. Needless to say, almost no one will continue on this road if he/she knows that the codebase will become that large that way.

Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep

For instance, the below codes fall into this category:

/* This is the original codes
$._chainedResult = function(list, note, argObj_, initVal_) {
    var chainedResultFunc = this._rules.chainResultFunc(note);
    return chainedResultFunc(list, note, argObj_, initVal_);
};
*/
// This is the refactored codes
$._chainedResult = function(list, note, argObj_, initVal_) {
    var chainedResultFunc = this._chainedResultFunc(note);
    return this._runChainedResult(list, note, argObj_, initVal_, chainedResultFunc);
};
$._chainedResultFunc = function(note) {
    return this._rules.chainResultFunc(note);
};
$._runChainedResult = function(list, note, argObj_, initVal_, resultFunc) {
    return resultFunc(list, note, argObj_, initVal_);
};

Think of a codebase with less than 100k lines but with already way more than 1k classes/interfaces and 10k functions/methods. It's almost a given that the deepest call stack in the codebase will be so deep that it can even approach the 100 call mark. It's because the only way for very small functions to be very verbose with tons of boilerplate is that most of those small functions aren't actually doing anything meaningful. We're talking about deeply nested delegates/forwarding functions which are all indeed doing very easy, simple and small jobs, and tons of interfaces or explicit dependencies having only 1 implementation or concrete dependency(configurable options with only 1 option ever used also have this issue).
Of course, LID + LIV does have its places, especially when the business requirements always change so abruptly, frequently and unpredictably that even the most reasonable assumptions can be suddenly violated without any reason at all(I've worked with 1 such project). As long as there can still be sane and sensible architectures that can last very long, if the codebase isn't flexible in almost every direction, the software teams won't be able to make it when they've to implement absurd changes with ridiculously tight budgets and schedules. And the only way for the codebase to be that flexible is to have as many well-defined interfaces and seams as possible, as long as everything else is still in moderation. For the newcomers, the codebase will seem to be overengineered for things that have never happened, but that's what you'd likely do when you can never know what's invariant.

Nevertheless, LID + LIV should still be refactored once there are solid reasons and evidence to prove that the codebase can begin to stabilize, or the hidden technical debt incurred from excessive overengineering can quickly accumulate to the point of no return. At that point, even understanding the most common call stack can be almost impossible. Of course, if the codebase can really never stabilize, then one can only hope for the best and be prepared for the worst, as such projects are likely death marches, or slowly becoming one. Rare exceptions are that some codebases have to be this way, like the default RPG Maker MV codebase, due to the business model that any RPG Maker MV user can have any feature request and any RPG Maker MV plugin developer can develop any plugin with any feature.

Summary

While information density and volume are closely related, there are no strict implications from one to the other, meaning that there are different combinations of these 2 factors and the resultant styles can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions:

Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy
Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First
Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate
Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep

Teams With Programmers Having Different Styles

Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes

While both can work with very HID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The latter values codebase quality over software engineer mental capacity due to their limits on taking information volume, while the former values the opposite due to their exceptionally strong mental power. Thus the former will likely think of the latter as being too weak to fathom the codes and thus the ones to blame, while the latter will probably think of the former as having too little architecture in mind and thus the ones to blame, as architectures that are beneficial or even necessary for the latter will probably be severe obstacles for the former.
Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things

While both can work with very HIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values function simplicity over function capabilities due to their limits on taking information density, while the former values the opposite due to their extremely strong information density decoding. Thus the former will likely think of the latter as doing too few things that actually matter in terms of important business logic, as simplicity for the latter means time wasted for the former, while the latter will probably think of the former as being too needlessly complex when it comes to implementing important business logic, as development speed for the former means complexity that is just too high for the latter(no matter how hard they try).

Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak

It's clear that these 2 groups are at complete opposites - the former prefers massive balls of complicated and convoluted spaghetti legacy over too many small functions with the call stacks being too deep, due to the heavy need of optimizing the codebase to death, while the latter prefers the opposite due to the heavy need of making the codebase very flexible. Thus the former will likely think of the latter as over-engineering freaks while the latter will probably think of the former as over-optimization freaks, as codebase optimization and flexibility are often somehow at odds with each other, especially when one is heavily done.

Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose

It's clear that these 2 groups are at complete opposites - the former prefers otherwise high quality codes that are hard to fathom at first over excessively verbose codes with tons of redundant boilerplate, due to the heavy emphasis on the large long term pleasures, while the latter prefers the opposite due to the heavy emphasis on the small short term pains. Thus the former will likely think of the latter as being too messy and verbose while the latter will probably think of the former as being too concise and organized, as the long term pleasures from high codebase quality are often at odds with the short term pains of the newcomers fathoming the codebase at first, especially when one is heavily emphasized over the other.

Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient

While both can only work with very LIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values the learning cost over the maintenance cost(the cost of reading already fathomed codes during maintenance) due to their limits on taking information density, while the former values the opposite due to their good information density skill and reading speed demands. Thus the former will likely think of the latter as being too ineffective and inefficient when writing codes that are easy to fathom in the short term but time-consuming to read in the long term, while the latter will likely think of the former as being too harsh to newcomers when writing codes that are fast to read in the long term but hard to fathom in the short term.
Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles

While both can only work with very LID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The former values codebase beginner friendliness over software flexibility due to their generally lower tolerance of very small functions, while the latter values the opposite due to their limited information volume capacity and high familiarity with very small and flexible functions. Thus the former will likely think of the latter as being too flexible towards cases that are almost impossible to happen under the current business requirements, due to such codebases being generally harder for newcomers to fathom, while the latter will likely think of the former as being too friendly towards beginners at the expense of writing too rigid codes, because beginner-friendly codebases are usually those just thinking about the present needs.

Summary

It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are being more and more extreme. The combinations of these conflicts may be:

Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes
Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things
Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose
Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient
Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles

Conclusions

Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there's quite some middle ground to play with. In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle grounds well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some software engineers falling into at least 1 extreme, so we should still know how to work well with them. After all, nowadays most real life business codebases are about teamwork, not lone wolves.

By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think about some aspects behind codebase readability and the nature of conflicts about it, and that we can become better able to deal with more different kinds of codebases and software engineers. I think that it's more feasible for us to be able to read codebases with different information density and volume than to ask others and the codebase to accommodate our information density/volume limitations.

Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large(e.g.: the existence of consistent formatting and meaningful naming) and partially subjective at large(e.g.: the ability to handle different kinds of information density and volume for different software engineers). Maybe many avoidable conflicts involving readability stem from the tendency of many software engineers to treat readability as an easy, simple and small concept that is entirely objective.

DoubleX

DoubleX

 

Even if numbers don't lie, they can still be easily misinterpreted

Let's start with an obvious example(example 1):

Virus A has the average fatality rate of 10%(1 death per 10 infections on average)
Virus B has the average fatality rate of 1%(1 death per 100 infections on average)

Which virus is more dangerous towards the majority? If you think that the answer must always be virus A, then you're probably very prone to misinterpreting the numbers, because you're effectively passing judgments with too little information in this case. What if I give you their infection rates as well?

Virus A has the average infection rate of 2 every week(every infected individual infects 2 previously uninfected ones per week on average)
Virus B has the average infection rate of 5 every week(every infected individual infects 5 previously uninfected ones per week on average)

First, let's do some math on the estimated death numbers after 4 weeks:

Virus A death numbers = 2 ^ 4 * 0.1 = 1.6
Virus B death numbers = 5 ^ 4 * 0.01 = 6.25

The counterparts after 8 weeks:

Virus A death numbers = 2 ^ 8 * 0.1 = 25.6
Virus B death numbers = 5 ^ 8 * 0.01 = 3906.25

I think it's now clear enough that, as time progresses, the death numbers from virus B will only grow larger and larger relative to those from virus A, so this case shows that the importance of infection rates can easily outclass that of fatality rates when it comes to evaluating the danger of a virus towards the majority. Of course, this alone doesn't mean that virus B must be more dangerous towards the majority, but this is just an easy, simple and small example showing how numbers can be misinterpreted, because in this case, judging from a single metric alone is normally dangerous.

Now let's move on to a more complicated and convoluted example(example 2):

Country A, having 1B people, has 1k confirmed infection cases of virus C after 10 months of the 1st confirmed infection case of that virus in that country
Country B, having 100M people, has 100k confirmed infection cases of virus C after 1 month of the 1st confirmed infection case of that virus in that country

Which country performed better in controlling the infections of virus C so far? Now there are 3 different yet interrelated metrics for each country, so the problem of judging from a single metric is gone in this example, therefore this time you may think that it's safe to assume that country A must have performed better in controlling the infections of virus C so far. Unfortunately, you're likely being fooled again, especially when I give you the numbers of tests for virus C performed by each country:

Country A - 10k tests performed for virus C in that country
Country B - 10M tests performed for virus C in that country

This metric for both countries, combined with the other metrics, reveals 2 new facts that point to the opposite judgment:

Country A has just performed 10k / 10 / 1B = 0.0001% worth of tests for virus C over its population per month on average, while country B has performed 10M / 100M = 10% in that regard
1k / 10k = 1 case out of 10 tested ones is infected in country A on average, while that in country B is 100k / 10M = 1 out of 100

So, while it still doesn't certainly imply that country B must have performed better in controlling the infections of virus C so far, this example aims to show that even using a set of different yet interrelated metrics isn't always safe from misinterpreting them all.
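For anyone who wants to check the arithmetic, here's a small sketch in plain JavaScript(using the same simplified "infections = rate ^ weeks" growth model as example 1, and the raw figures from example 2) that reproduces the numbers above:

// Example 1: estimated death numbers after the given number of weeks
function estimatedDeaths(infectionRate, fatalityRate, weeks) {
    return Math.pow(infectionRate, weeks) * fatalityRate;
}
console.log(estimatedDeaths(2, 0.1, 4), estimatedDeaths(5, 0.01, 4)); // 1.6 vs 6.25
console.log(estimatedDeaths(2, 0.1, 8), estimatedDeaths(5, 0.01, 8)); // 25.6 vs 3906.25

// Example 2: tests per population per month, and confirmed cases per test
var countryA = { population: 1e9, months: 10, cases: 1e3, tests: 1e4 };
var countryB = { population: 1e8, months: 1, cases: 1e5, tests: 1e7 };
[countryA, countryB].forEach(function(c) {
    console.log(c.tests / c.months / c.population); // 0.000001 (0.0001%) vs 0.1 (10%)
    console.log(c.cases / c.tests); // 0.1 (1 in 10) vs 0.01 (1 in 100)
});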
So, why can numbers be misinterpreted so easily?

At the very least, because numbers without contexts are usually ambiguous or even meaningless, and realizing the existence of the missing contexts generally demands relevant knowledge. For instance, in example 2, if you don't know the importance of the number of tests, it'd be hard for you to realize that even the other 3 metrics combined still don't form a complete context, and if most people around the world don't know that, some countries can simply minimize the number of tests performed for virus C, so their numbers will make them look like they've been performing incredibly well in controlling the infections of virus C so far, meaning that numbers without contexts can also lead to cheating by being misleading rather than outright lying. Sometimes, contexts will always be incomplete even when you've all the relevant numbers, because some contexts contain important details that are very hard to quantify, so when it comes to relevant knowledge, knowing those details is crucial as well.

Let's consider this example(example 3) of a team of 5 employees who are supposed to handle the same set of support tickets every day, and none of them will receive any overtime compensation(actually having overtime will be perceived as incompetence there):

Employee A, B, C and D actually work the supposed 40 hour work week every week, and each of them handles 20 support tickets(all handled properly) per day on average
Employee E actually works an 80 hour work week on average instead of the supposed 40, and he/she handles 10 support tickets(all handled properly) per day on average

Does this mean employee E is far from being on par with the rest of the team? If you think the answer must always be yes, then I'm afraid you've yet again misused those KPIs, because in this case, the missing contexts at least include the average difficulty of the support tickets handled by those employees, and such difficulty is generally very hard to quantify. You may think that, as all those 5 employees are supposed to handle the same set of support tickets, the difficulty difference among the support tickets alone shouldn't cause such a big difference in apparent productivity between employees A, B, C and D and employee E. But what if I tell you that it's because the former 4 employees have been only taking the easiest support tickets since day 1, and all the hardest ones are always taken by employee E, which is due to the effectively dysfunctional internal reporting mechanisms against such workplace bullying, and employee E is especially vulnerable to such abuses? Again, whether that team is really that toxic is also very hard to quantify, so in this case, even if you've all the relevant KPIs on the employee performance, those KPIs as a single set can still be very misleading when used on their own to judge their performance.

Of course, example 3 is most likely an edge case that shouldn't happen, but that doesn't mean such edge cases will never appear. Unfortunately, many of those using the KPIs to pass judgment do act as if those edge cases won't ever exist under their management, and even if they do exist, those guys will still behave like it's those edge cases themselves that are to be blamed, possibly all for the illusory effectiveness and efficiencies.
To be blunt, this kind of "effectiveness and efficiency" is indeed just pushing the complexities that should be at least partially handled by those managers onto those edge cases themselves, causing the latter to suffer way more than what they've already been suffering even without those extra complexities that are just forced onto them. While such use of KPIs does make managers and the common cases much more effective and efficient, it comes at the cost of sacrificing the edge cases, and the most dangerous part of all is that, too often, many of those managers and common cases don't even know that's what they've been doing for ages. Of course, this world's not capable of being that ideal yet, so sometimes misinterpreting the numbers might be a necessary or lesser evil, because occasionally, the absolute minimum required effectiveness and efficiencies can only be achieved by somehow sacrificing a small number of edge cases, but at the very least, those using the KPIs that way should really know what they're truly doing, and make sure they make such sacrifices only when they've to.

So, on one hand, judging by numbers alone can easily lead to utterly wrong judgments without knowing, while on the other hand, judging only with the full context isn't always feasible, practical nor realistic, therefore a working compromise between these 2 extremes should be found on a case-by-case basis. For instance, you can first form a set of educated hypotheses based on the numbers, then try to further prove and disprove(both sides must be worked on) those hypotheses on one hand, and act upon them(but always keep in mind that those hypotheses can be all dead wrong) if you've to on the other, as long as those hypotheses haven't been proven to be all wrong yet(and contingencies should be planned for so you can fix the problems immediately). With such a compromise, effectiveness and efficiency can be largely preserved when those hypotheses work, because you're still not delaying too much when passing judgments, and the damage caused by those hypotheses when they're wrong can also be largely controlled and contained, because you'll be able to realize and correct your mistakes as quickly as possible. For instance, in example 3, while it's reasonable to form the hypothesis that employee E is indeed far from being on par with the rest of the team, you should, instead of just acting on those numbers directly, also try to have a personal meeting with that employee as soon as possible, so you can express your concerns on those metrics to him/her, and hear his/her side of the story, which can be very useful in proving or disproving your hypothesis, helping both of you solve the problem together in a more informed manner.

DoubleX

DoubleX

 

reCreations

I've been coding in RM for a while now, specifically MV. I've gotten pretty good at it, too XD   As time went on, the things I was coding changed, from things that were trivial and honestly didn't need to be coded, into things that were a bit smarter, more focused.  One of the earliest examples was a Utility script.  It had silly little shortcut calls for things like Game Variables and Game Switches. It also had an awful implementation of a well intentioned idea - saved Show Picture Settings. I found the Show Picture script call to be quite argument heavy, and Move Picture was no less so. I wanted to only supply the arguments I was changing for the Move Picture command. The idea was great, implementation was a JOKE!    Nowadays, I do much better. I follow conventions, I comment, I plan, I do a lot that I should do. However, with coding, you're always learning. You always find something new, or pick up a new trick, or realize you've been doing something wrong, etc.  Refactoring is a huge......factor, in scripting. Doing it too often could actually make compatibility nightmares and headaches where they don't need to be.    Right now, I'm actually going through a lot of old plugins, and rewriting them completely with either the same name, or a new one, prefixed with 're'   reAction is my redo of my animation script rEvent is my redo of my event copier/spawner reTool is my most current utility script reStorage is my redo of an old storage system   All of them except reStorage are complete.  I also remade my image cache, calling it 'reCache', but even since then, I've remade it again! Right now, though, it's fantastic.   So then, for this blog entry, that's what I've been doin :) Time for sleep    
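For illustration, the "only supply the arguments you're changing" idea for Move Picture can be sketched nowadays with core MV calls like this(a rough sketch of the concept only, not the old Utility script or reTool):

// Move a picture while only overriding the fields passed in; everything else
// is read back from the picture's current state via Game_Screen/Game_Picture.
function movePictureWith(pictureId, overrides, duration) {
    var pic = $gameScreen.picture(pictureId);
    if (!pic) return; // nothing shown in that picture slot yet
    var o = overrides || {};
    $gameScreen.movePicture(
        pictureId,
        "origin" in o ? o.origin : pic.origin(),
        "x" in o ? o.x : pic.x(),
        "y" in o ? o.y : pic.y(),
        "scaleX" in o ? o.scaleX : pic.scaleX(),
        "scaleY" in o ? o.scaleY : pic.scaleY(),
        "opacity" in o ? o.opacity : pic.opacity(),
        "blendMode" in o ? o.blendMode : pic.blendMode(),
        duration || 60
    );
}
// e.g. slide picture 1 to x = 400 over 30 frames, leaving everything else untouched:
// movePictureWith(1, { x: 400 }, 30);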

Arrpeegeemaker

Arrpeegeemaker

 

Entry 015: 'Cuties' (Twatties)

I've not seen the movie (Netflix is and will always be a wealth-based privilege), but I did watch the review by SidAlpha, and from what I gathered, it is less about 'coming of age' than it is about twats being rebellious, something I can relate to but nothing like that. Apart from this, I really don't have much to say.

I don't think that the intent of the creator was malicious, but in the end, it's not about intent, but about content; the content is trash/poorly cobbled together storytelling.

Yes; twat and brat are the same thing. Brat is 'MuRiCaN' terminology, but all the same, all the same, English is such an odd language.


Anyway, arbitrary age and maturity do not go hand-in-hand; one only need to see what Donald Trump is tweeting for one prime example of many, of how immature 'adults' almost always are. I used to know one person, who at the time, was 'underage' but also homeless and no one cared otherwise. She performed sex work for money, and last I knew, had been victimized by thugs with badges and guns because her client's arbitrary age was much higher. If not a victim of theft because I don't know if she got paid beforehand (the thugs were lying in wait), at the very least, they only went after the client and didn't provide her with any aid at all. I've not seen or heard from her in a long time. Maybe she found a way out, maybe she's buried six feet under, or incarcerated, Odinn only knows... I do know one thing. She knew, and understood. She understood the potential domino effect, and to me, that's more than mature enough.

PhoenixSoul

PhoenixSoul

 

dwimmerdelve Yay, Did some more work on Dwimmerdelve stuff.

Yay, I have actually been doing stuff related to my game for once! For example, I made a basic sprite for a character I have been thinking about adding to my game for a while. I present to you, "The Azure Demon":     She's the owner of one of the dungeon areas in my game, a mansion near a misty lake I called "The Azure Demon Mansion". I know, I know, a color themed 'evil creature' girl who owns a mansion, never been done before! She also has an army of ninja maids! 'Original character do not steal'. Maybe I should populate the mansion with more wholly original characters! Like a lazy gatekeeper, a sickly librarian with a cute imp assistant, a head maid that can stop time, and a sister she keeps locked away in the basement because she's too wild. Woah there maybe I should spread some of that originality around a bit!   Thinking of having her actual name be something like Lapidea Lazul Lājevard.   ... Though I could imagine her fairy name being Lazzy Lass Glasgeamchloch if she was a fairy. Personally I think that name is muuuuch cooler. Maybe I will use that as a pet name! Think it would annoy her?   Edit: Oh! Almost forgot to say what her role in my game is! Looking like she is going to be the second major boss in my game, and probably going to be the leading character in what I have called before the 'demon subplot'. She's also cute as a button. That's a very important plot point that needs to be addressed up front. So be warned, I intend to make this demon super duper cute.

Kayzee

Kayzee

 

Entry 00D: RPG Maker MZ

Well, there's not a whole lot I can say as of now, but from what I have seen, MZ looks very promising, and I love seeing coders dropping their code projects here (if only graphic artists would do so as well lolz), but until I get a fair chance to try it for myself (is there a demo version?), I cannot really commentate on it.

I mean, four mapping layers plus the 'auto' option does make it sound like mapping will potentially be very dynamic, and that alone has caught my interest enough to wanna see more, but though I own MV, the cost is still far too high and way more than I ever have in my Steam Wallet. So, I will wait...

It may be five, six years before I see MZ, if ever, but, that's how being underprivileged goes.

PhoenixSoul

PhoenixSoul

 

Entry 014: Con-Vid-19

I actually have very little to say that doesn't repeat what I've said before.

Fact is, what you hear from mainstream media and the government is not backed by anything but inflated numbers and fearmongering. Even those who have said they know of someone near their circle of influence that has the infection cannot be certain that they're not being lied to. Why cover it up, @Kayzee? So that the evidence that proves the deaths to be anything but the 'pandemic' (PLANDEMIC) can never be exposed. I once believed there to be some plausibility to this, but I've not seen anything that backs this up.

It's a scam, and one in the works for a long time. Goes as far back as the fraud cover-up known as 9/11 that destroyed mountains of financial records of the government's fraudulent acts, and likely even farther. I'm going to keep this short and simple. #FOLLOWTHEMONEYTRAILS

PhoenixSoul

PhoenixSoul

 

rmmv Event Shops for MV/MZ

With MZ around the corner in a week, I've got plans to port things over from MV to MZ, revamp some older stuff, maybe try some new things.   Got a plugin request recently to make a shop that will appear "on the map". While building it, I noticed events could actually run while you're shopping, and then found a way to allow you to control the actual shop itself with a parallel process event.    
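(For context, in stock MV/MZ a shop is normally opened as its own scene, e.g. via the Shop Processing command or a script call roughly like the sketch below, which is what makes an "on the map" shop that keeps events running such a different beast. This is just the vanilla engine call, not the plugin itself.)

// Goods entries: [type (0 item / 1 weapon / 2 armor), database id, price type (0 standard / 1 specified), price]
var goods = [[0, 1, 0, 0], [1, 2, 1, 500]];
SceneManager.push(Scene_Shop);
SceneManager.prepareNextScene(goods, false); // false = the player may also sell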

Tsukihime

Tsukihime

 

euthanization Entry 013: Goodbye, Oscar...

Tomorrow...July 2, 2020...
  My doggie, Oscar, is getting euthanized.
Why?
Let us just say that, for his breed he's well past his prime (a pug), and as a result...

1) is 94% blind, 96% deaf
2) can't walk or stand very well
3) bumps into everything, lack of spatial awareness
4) has issues with breathing and eating (the former an issue his whole life but is worse now)
5) jumps at the slightest thing
6) is obviously miserable
7) can't really be bothered to do much more than sleep
"How old is he?"
Had him since early 2005. He was a teeny little furball back then.
"I'm sorry."
It's alright; at least he's in somewhat better health than Max was back when he had to be euthanized back in 2016. You're gonna ask, so... Max was my other pug; got him mid 2003 and he was slightly more mature. His health declined quickly after 2011. He got to a point where he was completely blind and his eyes had turned blue, was completely deaf with a very shrill bark, and barely could move, eat or drink. I remember still, the last time I carried him down the stairs to go outside, and about two hours later, he had a seizure he never fully recovered from. Yeah.
I'm glad Max is gone; he no longer suffers at least. I will feel the same way about Oscar's euthanization, but, the catch is that even though I know he suffers, it's far less obvious, at least until you observe him walking over things and bumping into walls...
So...losing Oscar is going to hit me harder. I loved giving him rubs and scritches, and when he could keep up with my rhythm, walking him (I stopped walking him because he just cannot keep up anymore so I have someone else aiding me there since I have autism-related issues with my rhythm being upset like that). Likely, I won't be getting another pet. Actually, it might be good to just not have one at all for a while. However, my depression is only going to be worsened by this...
Anyway, just wanted to say something about this, in case something comes up down the line as a result of the tragedy...
May the Divines guide us all...

PhoenixSoul

PhoenixSoul

 

Entry 00C: Ramsey and More Problems

So, I've hit yet another roadblock, and I've no clue how to go around it.

I don't have the endurance to stress over these, so if this is something I'm forced to solve on my own, Ramsey v0.1.5 is as far as this goes, and will be the last RM project I do any level of serious work on. It's too much for me to try to create something that I cannot fit into a lacking skill set.

Okay, so you're wondering what the issue is, and why I didn't post this in the RGSS boards.
The latter reason is that I need to talk about this in an open fashion and I can't do that without reprimand in the RGSS board.

The issue:
So, I've used Moby's Sprite Bugfix that makes it so that sprites taller than 32 px aren't affected stupidly by Star Passability tiles.
I've already made one fix that doesn't take screen tone into effect (made an extra viewport and switched around some viewport functions), but now I've run into one more.

I have custom ceiling tiles that, when placed, obscure the player. They're in Tileset E. This issue was not present before these tiles were implemented (and so the old event system that hid the player and set them to Through before going through a door or under a ceiling transition, then undid both afterwards, was removed once the tiles were in), and that issue is that event graphics taller than the player sprite will get drawn over the player sprite when the player IS STANDING IN FRONT OF THE EVENT. IT LOOKS BAD, AND MAKES ME FEEL LIKE A DAMN CHUMP.

I had tried several ways to make it not do that, even trying to make it draw over the player sprite if the Y COORDINATES OF THE PLAYER SPRITE WERE LESS THAN THE EVENT AND THAT THREW ERRORS DESPITE USING $GAME_PLAYER.Y AND $GAME_MAP.EVENTS[@EVENT_ID].Y WHICH ARE ALREADY DEFINED BY THE CLASS GIVEN.

So, I asked dearest love @Kayzee for aid, and...naught. She gave an idea to make the events in question Below Characters which not only doesn't help, but gives them Through flags to boot. These door events may as well be guillotine events!

So, am I going to get some help here!? Or, do I let Ramsey's deadbeat father kill her instead!?!?

PhoenixSoul

PhoenixSoul

A blank slate

Many things can change within a week or a month, let alone six years.   Looking at the past can be fun but sometimes it's pretty embarrassing, it was more like the latter for me when i dropped back here again after so long. I deleted or hid most of my old stuff, for the better, trust me.   I've said many things in varying degrees of stupid, but at least i've grown and know just how stupid this all was, im not perfect yet but i did improve. So yeah this post is a sort of catch-up for the few people that interacted with me in this community. Im not dead just life taking you places and also realising how stupid you once were. I also owe an apology to certain people that actually expected things from me but never heard from me since, super late but sorry....   I wont make any promises for anything because god knows if i can keep them, but lets just say things will be different from now on.

UberMedic7

UberMedic7

 

Entry 00B: Ramsey and a problem I just discovered

Okay, so I've mostly been working on mapping, and getting art assets taken care of, among other things, but I've come to realize that somewhere down the line...

Saving is no longer working, and that's going to be a huge problem for v.1.0. Right now, saving isn't needed to enjoy the demo, but I intend to have the first chapter of the game story done by the first non-demo version, and in that regard, I need saving to work properly. Thing is, I don't know why it doesn't work. So, I will need help with this, and hopefully, @Kayzee can find time between sessions of that new Switch game (that I'll likely never play due to my wealth privilege level being lower), to aid me. If not...hmmm...

Well, I'll cross that bridge when I get there I guess. I'm still waiting on one thing from one artist, who finally has their laptop all setup, and that's great.

I actually am curious about setting system functions like saving to game switches, if that breaks them. I mean, I set the menu command to a switch is all; it won't appear if the switch is off but I wouldn't have thought that would disable the functionality. Hmmm...

Eh. I've a lot on my mind and it is difficult to focus on gamedev, and fuck, even on gaming. I barely can do much of anything without being distracted by this or that, and it's too damn much! I need help here, but I'm most certainly not getting it! Yeah! I need help here!!!! (and asking for it from people offline gets me a 'fuck-you-but-I-won't-actually-say-it' response)

I love the story I have come up with and I want to deliver it; so can I get some damn help here!? Is that too much to ask!?!?!?!?!?!?!!??

PhoenixSoul

PhoenixSoul

 

Entry 00A: Ramsey, 2/29/2020 (COUNT ON IT)

(clamps hands over ears to silence the cheers, the jeers and the horns)   ALRIGHT ALREADY!!!!   lolz

Anyway, there's not much to say more than yes, those of you who were hoping I'd finally release a demo, are gonna get your wish.   However, there are caveats to note:
I'm still working on a lot of things, and right now, I don't really have a lot of critical things in place, so the first demo release is not going to be feature-filled.

What can be expected:
A short demo, mostly text by the main character and/or the narrator.
Half an hour of gameplay tops
A good amount of potential bugs and glitches, mayhaps even Interpreter crashes (need your help in testing!)
Some 'easter eggs'
Plenty of Ramsey's vulgarities
No censorship
No saving (kinda pointless as there's only one major event)
Mack's/Looseleaf sprite graphics combined with MV hair (it works well honestly) and MV-style face sets
Political/Religious references and mockery plus a strong LGBT+ representation (now and going forward!)

What's being worked on:
The battle system and battler graphics (I have not found graphics for things like police, military, and so on-do they even exist?), no battles yet
Inventory items, shops and the mercantile system (as of now the third of the three is pretty broken)
Clothing/face bits for all twenty-four possible playable characters (only a few have their basic clothing all done with some others having placeholders)
Story is still being fleshed out (I have a lot already written down)
Icon graphics (a lot are done but there's much left to do)
Title screen art (gonna need help with this-somehow)
Sorcery system and passive skills (I have a few of the latter done as well as one Destruction element of the former done)
Relationship system possibilities (mostly just a proof of concept as of now)
Custom Map UI (as in a scene that shows a map screen and where certain things are on it)
A soft game menu screen that allows the player to leave the game but still technically be playing the game (I have the visuals made but they're not refined yet)
  What's planned:
Multiple endings (I have one 'true' ending in mind)
Multiple, replayable game story chapters akin to how Half Minute Hero does it
Character building that caters to the player
NSFW/Mature scenes (I'd love to have these as visual scenes but for now they will be text only)
A way to force game/chapter restarts (Hard Game Overs) in the case of something truly horrific happening (I don't know if it is possible though)
Cameos from other games and media (I already have a couple of these partially finished)

Supporters and those whom have aided me thus far:
My team (KBGaming): Aylee, Celica, Claire, Fiena, and Rachael
Dearest love @Kayzee, whom has been great with RGSS3 stuffs
@Rikifive, whom has also aided me in RGSS3 stuffs
@AutumnAbsinthe, fellow femme fair who loves horror and memes as much as anyone
@Verdiløs Games, whom hasn't been around in a bit, is hopefully recovering well.

You, among many others, have aided and supported me. Merci beaucoup. I hope you enjoy my vision.

PhoenixSoul

PhoenixSoul

 

Dead Evolution

Hello everyone! I'm Abby Freeman and I am officially announcing my venture into survival horror - Dead Evolution.

WHAT IS DEAD EVOLUTION?
Dead Evolution is a survival horror RPG game. You play as Timothy Wong, a man looking to help make a cure for the zombie virus. Well, technically, there already was a cure released via an aerosol agent. It just made the zombies mutate. Some gained sentience, wandering the wasteland fully aware and yet unable to speak or remember their pasts; others gained new abilities, like screaming or running. As Timothy, you're searching for one of the sentient zombies to bring back to your home base... But there are complications, such as your communication device breaking, nearly running out of ammo, being unable to track the zombie right away, etc. that prevent this from being too easy. But the game doesn't end once you get back to the base - all of that was just the first half of the game. The second half involves more zombie killing, saving your child and a BFG. There are companions, but they can't help in combat outside of certain scripted circumstances.

Think 7 Days To Die meets Days Gone meets Resident Evil.

WHO ARE THE MAIN CHARACTERS?
The guy you'll see the most of is Timothy Wong, a former high school science teacher-turned-combat scientist. He's good natured and kind, always looking out for others. In other words, the perfect candidate for the cure effort. He's mastered using guns and knives against the zombies. Outside of Timothy, there's not really another "major character". For NPCs though, there are several. There's Brandi Mitchell, a sweet lady with a heart of gold and a steel-cored baseball bat. There's Becky Nelson, who thinks backstabbing is super fun - literally and figuratively - along with pretending to be a damsel in distress. There's Mx Knight, a trader of sorts who will give you what you need at a small price - and if you chip in one extra sandwich or something, you'll get extra ammo absolutely free*! There's also the reinforcements team you can call in - Destiny and Arnold - who are useful, but definitely don't care about the cause. Destiny is there to drive like a maniac and Arnold just wants to kill things. There are survivor settlements in some locations, but aside from trading and resting, there isn't much you can do. There's also everyone down in the base, most notably Mr Waldon, Timothy's boss, and Timothy's 13-year-old child, but they don't come into play until the second half.

WHAT ELSE DO YOU WANT TO TELL US?
The game is a throwback to survival horror. Puzzles, limited ammo and health items, save points, an emphasis on fear, etc. The game also combines survival horror with survival. Sure, ammo is limited, but you can also make it. Sure, health is limited, but you can also make it. I want the survival game part to add a faint glimmer of hope to the desolate situation. Also, there's a small thing at the beginning where you can customize Timothy a little to alter a few stats and unlock certain things earlier than normal (Was he a P.I. after he was a teacher? Was he good at gym in high school? Did he live in a slum, a small town, a town, a city, or a big city?), as well as tune him a little more to your liking (What is his child's name and gender? What gender was his spouse?) Some things will always remain the same - he had a loving family, became a teacher, took another career to help even more people, married a wonderful person and adopted an amazing child - then, his job was rendered obsolete by the apocalypse.
He joined the cure effort to continue to help. His spouse was ripped to shreds by zombies. Now he's making sure this cure gets made so hopefully his child gets a happy life.

I might also do early access.

WHAT ELSE WOULD YOU LIKE TO KNOW?
Let me know in the comments! I'll answer anything!

*Ignore the fact that you have to donate a sandwich to get the free ammo.

AutumnAbsinthe

AutumnAbsinthe
