Everything posted by DoubleX

  1. Purpose
     Lets you directly edit various built-in global formulae
     Video
     Games using this plugin
     None so far
     Parameters
     Script Calls
     Plugin Command
     Prerequisites
     Terms Of Use
     Contributors
     Changelog
     Download Link
     Demo Link
  2. Note
     This script is extremely similar to Yanfly Engine Ace - Battle Engine Add-On: Enemy HP Bars, so crediting DoubleX or his alias will violate this script's terms of use.
     Prerequisites
     Yanfly Engine Ace - Ace Battle Engine (created by Yanfly)
     Script name
     DoubleX RMVXA Enemy MP/TP Bars Addon to Yanfly Engine Ace - Ace Battle Engine
     Author
     DoubleX: - This script
     Yanfly: - Yanfly Engine Ace - Ace Battle Engine
     Terms of use
     Same as that of Yanfly Engine Ace - Ace Battle Engine, except that you're not allowed to give DoubleX or his alias credit
     Introduction
     Displays the enemy mp and/or tp bars
     Video
     https://www.youtube.com/watch?v=C9NAGpaP230
     Features
     Almost no scripting knowledge is needed to use this script (some is needed to edit it)
     Instructions
     Open the script editor and put this script into an open slot between the script Yanfly Engine Ace - Ace Battle Engine and Main. Save to take effect.
     Compatibility
     Same as that of Yanfly Engine Ace - Ace Battle Engine
     FAQ
     None
     Changelog
     v1.03a(GMT 1000 10-10-2022):
     - Added MP_PERCENTAGE_DECIMAL_DIGHT_NUMBER and TP_PERCENTAGE_DECIMAL_DIGHT_NUMBER
     v1.02b(GMT 1000 21-5-2016):
     - Fixed not updating the mp/tp bar fill ratio for mp/tp changes on battle start
     - If mmp is 0, the mp bar will be fully filled to show that the mmp is 0
     v1.02a(GMT 0200 6-4-2015):
     - Added MP_CRISIS_TEXT_COLOR
     v1.01e(GMT 0900 14-2-2014):
     - Fixed the bug where the mp and tp bars of hidden enemies are shown
     - Further increased efficiency and reduced lag
     v1.01d(GMT 0400 4-10-2014):
     - Further increased efficiency and reduced lag
     v1.01c(GMT 0300 4-9-2014):
     - Updated compatibility with DoubleX RMVXA Percentage Addon to Yanfly Engine Ace - Battle Engine Add-On: Enemy HP Bars
     v1.01b(GMT 0500 2-9-2014):
     - Further increased efficiency and reduced the size of this script
     - Reduced lag induced by the v1.00b efficiency upgrade
     v1.01a(GMT 1700 1-9-2014):
     - Added mp and tp text x and y offsets relative to the respective bars
     - Added MP_TEXT_COLOR and TP_TEXT_COLOR
     v1.00b(GMT 1600 1-9-2014):
     - Fixed undesirable results when text size > bar height
     - Increased efficiency and reduced the size of this script
     v1.00a(GMT 0500 29-8-2014):
     - 1st version of this script finished
     Download Link
  3. Updates
     v1.03a(GMT 1000 10-10-2022):
     - Added MP_PERCENTAGE_DECIMAL_DIGHT_NUMBER and TP_PERCENTAGE_DECIMAL_DIGHT_NUMBER
  4. Just declined to renew my job contract (because I foresee that the situation will probably go from very nice to really tough soon) and ended the current one several days ago, after working there for 2 years, and I hope that it means I'll have enough time and motivation to work on RMMZ again :)

  5. Note
     This plugin's available for commercial use
     Purpose
     Lets you set some states to reverse the ally/foe identification
     Games using this plugin
     None so far
     Notetag
     Plugin Calls
     Video
     https://www.youtube.com/watch?v=NCVMdR7HFls
     Prerequisites
     Abilities:
     1. Little Javascript coding proficiency to fully utilize this plugin
     Terms Of Use
     You shall keep this plugin's Plugin Info part's contents intact
     You shall not claim that this plugin's written by anyone other than DoubleX or his aliases
     None of the above applies to DoubleX or his/her aliases
     Changelog
     DoubleX RMMV Confusion Edit
  6. DoubleX RMMV Confusion Edit

    Updates
    v1.00e(GMT 1100 9-9-2022):
    1. Fixed infinite loop in targetsForReversedExcludeSelf due to typo
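    For anyone curious how a single typo can hang the whole game, the usual pattern looks something like the hypothetical sketch below - this is not the actual plugin code, and everything except the function name (which comes from the changelog entry above) is made up purely for illustration:

      // Hypothetical illustration only - not the actual DoubleX RMMV Confusion Edit code
      function targetsForReversedExcludeSelf(subject, candidates, targetCount) {
          const targets = [];
          let picked = 0;
          while (picked < targetCount) {
              // Pick a random candidate that isn't the acting battler itself
              const member = candidates[Math.floor(Math.random() * candidates.length)];
              if (member === subject) continue;
              targets.push(member);
              // The typical typo: mistyping or dropping this line means "picked" never
              // changes, so the while condition never turns false and the game freezes
              picked++;
          }
          return targets;
      }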
  7. Note
     This plugin works for both RMMV and RMMZ
     Purpose
     Lets you extract texts in events/common events/battle events to txt file
     Video
     Games using this plugin
     None so far
     Parameters
     Prerequisites
     Terms Of Use
     Contributors
     Changelog
     Download Link
     Demo Link
  8. Note
     This plugin's available for commercial use
     Purpose
     Lets users show the battle turn clock, unit and count in battle
     Games using this plugin
     None so far
     Configurations
     Plugin Calls
     Video
     https://www.youtube.com/watch?v=l9-IX16T9Gg
     Prerequisites
     Plugins:
     1. DoubleX RMMV Popularized ATB Core
     Abilities:
     1. Little Javascript coding proficiency to fully utilize this plugin
     Terms Of Use
     You shall keep this plugin's Plugin Info part's contents intact
     You shall not claim that this plugin's written by anyone other than DoubleX or his aliases
     None of the above applies to DoubleX or his/her aliases
     Changelog
     Download Link
     DoubleX RMMV Popularized ATB Clock
     DoubleX RMMV Popularized ATB Clock v101a.js
  9. Updates
     v1.01a(GMT 1200 25-3-2022):
     1. Added the following parameters:
        - turn_clock_bar_x
        - turn_clock_bar_y
        - turn_clock_bar_width
        - turn_clock_bar_height
        - turn_clock_bar_back_color
        - turn_clock_bar_color1
        - turn_clock_bar_color2
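     As a rough idea of how bar parameters like these usually end up being consumed, here's a minimal sketch - it's not the actual plugin code; the window class, the parameter plumbing and the clockFillRatio argument are my own placeholders, and only fillRect and gradientFillRect are real rpg_core.js Bitmap methods:

      // Minimal sketch only - not the actual DoubleX RMMV Popularized ATB Clock code.
      // "Window_TurnClock" and "clockFillRatio" (0 to 1) are assumed placeholders.
      const params = PluginManager.parameters('DoubleX RMMV Popularized ATB Clock v101a'); // assumed file name

      function Window_TurnClock() { this.initialize.apply(this, arguments); }
      Window_TurnClock.prototype = Object.create(Window_Base.prototype);
      Window_TurnClock.prototype.constructor = Window_TurnClock;

      Window_TurnClock.prototype.drawTurnClockBar = function(clockFillRatio) {
          const x = +params.turn_clock_bar_x, y = +params.turn_clock_bar_y;
          const w = +params.turn_clock_bar_width, h = +params.turn_clock_bar_height;
          // Draw the bar background first, then the filled part on top of it
          this.contents.fillRect(x, y, w, h, params.turn_clock_bar_back_color);
          this.contents.gradientFillRect(
              x, y, Math.round(w * clockFillRatio), h,
              params.turn_clock_bar_color1, params.turn_clock_bar_color2);
      };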
  10. We all know that trust is one of the most important aspects in our lives, yet it doesn't mean everyone can handle it well, and sometimes some people are so bad at it that many problems in their lives are due to that. As far as I know, the maturity of thinking about trust can at least be categorized into the following levels - from the most dysfunctional to the most effective and efficient, even though it's clearly oversimplified and it's entirely possible that there are even lower and higher levels than those listed below:
      Level 1
      Some people either always completely trust everything or always completely distrust everything without even assessing what that thing is, and we know that it's so unrealistic and overgeneralized that only very immature people won't see past that. Fortunately, such naive people are extremely rare, and they won't remain that uneducated for long. For instance, some people do deeply believe that they can only trust themselves and that everyone else is always completely untrustworthy while they know next to nothing about any of the others. Needless to say, their lives will be in deep struggles when everyone is so connected to each other and the demands on a person become so complex in the modern world that it's incredibly hard to be totally self-sufficient forever. For those who choose to always completely trust everything, they'll soon fathom the truth that some things really can't be trusted at all, and either they'll ascend to Level 2, or go to the other extreme - always completely distrust everything, which will cause their lives to be even more miserable. Anyway, they'll soon see past the false dilemma between trusting everything and distrusting everything, thus it's very improbable that they won't ever ascend to Level 2 in their lives.
      Level 2
      Some people do better in that they know that not everything can be trusted, but that there are still some things that are trustworthy. However, they still either completely trust something or completely distrust something, and they'll quickly judge whether something is trustworthy or not, so it's still rather black and white thinking that doesn't work well in reality, and this inflexibility, albeit not uncommon at all, will soon hit such people so hard that they'll have to relearn what trust really is sooner or later. For instance, those deeply suffering from confirmation bias without even knowing it will likely just use the first impression of someone to decide whether that person can be completely trusted or is completely untrustworthy, and will never reevaluate the decision until reality teaches those people a tough lesson. Of course, judging someone using the first impression alone is judgmental to the extreme, and the negative consequences are so obvious that it's even hard for those people themselves to look the other way. For the things they completely trust, quite a few of them will turn out to be untrustworthy, so those people will turn from completely trusting them to completely distrusting them, and if there are more and more such things, those people will distrust more and more things in their lives, therefore the whole trend will induce them to descend to Level 1 - always completely distrusting everything. On the bright side, because it's even harder to remain on Level 1 than to ascend from Level 2 to Level 3, when those people ascend back to Level 2 after descending to Level 1, they'll be forced to let go of the false dilemma between complete trust and complete mistrust towards something, and hence ascend to Level 3.
      Level 3
      Some people realize that trust isn't either all or nothing, but rather a continuous spectrum, so instead of just thinking about whether something is trustworthy or not, they'll also think about how much it can be trusted, so it'll take them some time to be familiar with something before evaluating how much it can be trusted. Still, it's a static thinking that misses the key point that how much something can be trusted can change over time under certain conditions (unless such signs are so obvious that it's impossible for such people to miss them), so they'll eventually be caught completely off guard, even when there were already so many warning signs that they completely missed (because they never have the habit of actively seeking such signs to begin with), and they'd have long decreased the amount of trust had they noticed those signs. For instance, let's say an employee has worked in a company for a year, and the track record of that employee suggests that, while that employee isn't trustworthy enough to take the most crucial and difficult tasks, that employee is trustworthy enough to take some other important tasks that are still somewhat challenging but nowhere as difficult. However, because that employee mentally suffers deeply from suddenly becoming single without knowing what's going on, that employee can no longer function as effectively and efficiently as before, so some of the latest important tasks, which are very similar to those done well by that employee, suddenly fail badly, even though that employee didn't even try to hide the personal suffering in the company. As the boss decided that that employee is that trustworthy solely based on the performance of that employee over the last year, without even trying to periodically check for any abnormalities from that employee, that boss missed many obvious signs of that personal suffering and assigned those tasks to that employee anyway, hence causing the totally unexpected failure from that employee. While it's clear that that employee is also at fault and responsible for not actively informing the boss about the personal suffering and its impact on future performance, the point remains that assuming that the trustworthiness of something is a constant to be found can be very dangerous, so unless those people at this level still haven't grasped the utter horror of the even lower levels, they should figure out the problem of not dynamically adjusting the amount of trust in something over time, and thus ascend to Level 4.
      Level 4
      At this point, some people finally understand that trust is actually dynamic rather than static, so dynamic thinking is needed when thinking about trust, meaning that instead of just using past experience and track record to judge how trustworthy something is right now, they'll also consistently look for both positive and negative signs at the present moment, therefore they can take these present factors of changes in trustworthiness into account as well. Although it's already pragmatic enough to think about trust this way and not many have reached this level, the problem is that it's just reactive rather than proactive, so while they can quickly adjust the amount of trust towards something after those factors of changes have already manifested, they're still just passively reacting to these signs instead of actively trying to figure out the essence behind those factors.
Let’s say there’s another employee who was prolific and reliable at the start slowly became less and less effective and efficient, and eventually dysfunctional altogether due to prolonged covert workplace bullying hidden from the boss. So the boss, noticing this trend without knowing the root cause behind this issue, could only try to ask that employee about the lengthened performance drop, while gradually assigning less and less challenging and important tasks to that employee, with the benefits given to that employee being smaller and smaller, and had to unwillingly fire that employee at the end, because the boss failed to know the truth that way. Although it’s hard to blame the boss for not knowing what’s really going on with that employee when that employee didn’t even try to report anything, it still shows the issues of just passively reacting to the changes of the amount of trustworthiness, as it could’ve been become more rather than less trustworthy if the underlying conditions of changes were revealed. If the boss tried to proactively investigate what makes the employee appear to be less and less trustworthy besides just asking that employee personally, the boss might have discovered the workplace bullying in secret and kept an originally competent and loyal employee, rather than having to reluctantly fire that employee and possibly repeating the history in the future unknowingly. Level 5 This is where proactive thinking comes in, but people of this level are hard to find as it's hard to keep being on this level, and the paradox is that they don’t consciously emphasize trust anymore, because to them hypothetical thinking is much more flexible and responsive when it comes to constantly reassess the essence of the ever changing conditions behind the factors that increases and decreases the amount of trustworthiness towards something. So instead of thinking about how much something can be trusted at any moment, they think about on what probability that how something will behave under what conditions, and the essence behind the when and why of such correlations and causation hold, so once those underlying conditions change, those people can immediately adjust and correct their hypotheses while reconsidering whether and which previously established contingencies need to be executed(or swiftly come up with a backup plan that didn’t exist beforehand), and when they’ll be executed to what extents. For example, an employee formerly working for a rival company has demonstrated extraordinary competence and willingness to take the most challenging and important tasks in the current company without asking anything extra in return, and that employee can get them done all exceptionally well, so the current boss happily give that employee more and more privilege and recognition within the company, and thus that employee can ascend incredibly quickly there, regardless of the fact that that employee was frequently badmouthing the previous employer, which is the rival company. 
      But as what that employee has shown is far too good to be true, and the badmouthing of the rival company from that employee doesn't match what the boss knows about that company, the boss can't help but suspect that that employee, who worked for a rival company, is just acting and is up to something even bigger, so on one hand the boss appeared to have complete trust in that employee by giving that employee sole discretion over a new and large project that demands access to valuable company secrets, to lower the guard of that employee, but on the other hand privately asked a trustworthy security expert in the current company to silently monitor the activities in that project from that employee behind the scenes. It turns out that the suspected employee is actually an industrial spy still working for that rival company, and is assigned to covertly install an undetectable backdoor deep inside the new project using internal software systems that can access the most confidential and sensitive algorithms and data of the current company, so the rival company can later invisibly hack into that backdoor to steal that crucial information while keeping a low profile, and the frequent badmouthing from that employee about that rival company is just a cover-up.
      Hypothetical Thinking
      While it's clear that it'd be overkill and exhausting to use hypothetical thinking over trivial matters as well, when it comes to the key moments of determining the trustworthiness of something vital, hypothetical thinking can still come in handy. Also, do note that thinking about trust and hypothetical thinking don't have to be mutually exclusive, and in fact they can work well together, even though such a combination will never be easy nor cheap. Although there are many factors affecting the trustworthiness of someone, usually the most subjective and unclear one is motivation, which can be broken down into at least the following 5 basic building blocks:
      What does that someone need to get right now?
      What does that someone need to avoid right now?
      What does that someone want to get right now?
      What does that someone want to avoid right now?
      What is the emotional status of that someone right now?
      Other factors, like whether that someone has the experience, knowledge, information, resources and technique to get something done, while still absolutely necessary to determine the trustworthiness of that someone over that something, are usually much more tangible and visible to the others, so if one can reliably comprehend the basic building blocks constituting the motivation of someone, the other factors should also pose little challenge. As long as one can keep in touch with the factors constituting the trustworthiness of someone, that someone is unlikely to suddenly change from very trustworthy to very untrustworthy without being noticed beforehand, so it's generally hard to back-stab those with hypothetical thinking as their second nature, at least not with them unprepared. When using hypothetical thinking, one doesn't just come up with a single hypothesis and call it a day, but should instead explore at least several that are reasonably likely enough to warrant further verification, and act on the currently most probable one, with contingencies designated to handle cases when that hypothesis is proven to be wrong, until it's indeed proven to be wrong, at which point one acts on another hypothesis.
      Do note that, besides having to actively and consistently look for signs both supporting and negating the hypotheses, while a hypothesis can be the most probable one right now, after some time and some changes another hypothesis can become the most probable one, and sometimes one even needs to generate new hypotheses on the fly, so the whole hypothetical thinking is a constantly dynamic process, and there should be as few unsupported assumptions as possible at any moment. Of course, it's impossible to be even near perfect, so no matter how experienced, knowledgeable and skillful one is in practicing hypothetical thinking in real life, there will always be times when one will be caught completely off guard, so hypothetical thinking isn't about trying to eliminate uncertainty and the concept of trust altogether, but rather about minimizing the amount of uncertainty and the reliance on trust, while accepting that uncertainty is in the nature of life and trusting that one can deal with most of the remaining uncertainty most of the time. Because of that, those practicing hypothetical thinking should also be ready to be completely caught off guard, and that means they'll have to be able to be very flexible, spontaneous and versatile at any time, even though one will have to take very complicated and convoluted paths to get there. Combining everything, those with hypothetical thinking in mind first observe and test someone for a while, then act on a hypothesis based on the initial track record of that someone collected during that period, and those people will continue to look for signs that indicate both increases and decreases in the trustworthiness of that someone. If the hypothesis suggests that some such positive or negative signs will manifest and they mostly do, then the hypothesis is somewhat verified and should be kept, otherwise it's shown to be less and less accurate and should be tweaked somehow, and when there are enough such significant mismatches, the hypothesis will be proven to be dead wrong so those people will have to act on a new hypothesis, and the whole cycle will repeat again and again.
  11. This topic aims to share the basic knowledge on what the default RMMZ TPBS battle flow implementations do in general, but you're still assumed to have at least:
      1. Some plugin development proficiency (having written several easy, simple and small battle-related plugins up to 1k LoC scale)
      2. Basic knowledge on what the default RMMZ turn based battle flow implementations do in general
      3. Basic knowledge on what the default RMMZ TPBS battle flow does in general on the user level (at least you need to know what's going on on the surface when playing it as a player)
      Simplified Flowchart
      Please note that this flowchart only includes the most important elements of the battle flow, to avoid making it too complicated and convoluted for the intended target audience
      Battle Start
      Input Action Slots
      Thinking In Frames
      Frame Start
      Start Phase
      Turn Phase
      Action Phase
      Turn End Phase
      Battle End Phase
      Update TPB Input
      Summary
      That's all for now. I hope this can help you grasp this basic knowledge. For those thoroughly comprehending the essence of the default RMMZ TPBS battle flow implementations, feel free to correct me if there's anything wrong. For those wanting to have a solid understanding of the default RMMZ TPBS battle flow implementations, I might open a more advanced topic for that later.
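      To put the flowchart node names above into something more concrete, here's a heavily simplified sketch of what a frame-driven TPBS update might look like. This is not the actual RMMZ core script code - all identifiers here are made up for illustration, and only the phase names come from the flowchart in this post:

      // Heavily simplified TPBS sketch - illustrative names only, not RMMZ core script code
      const tpbsFlowSketch = {
          phase: "start",
          battlers: [{ name: "Actor", tpbCharge: 0, speed: 0.02 }],
          // "Frame Start": one call per rendered frame
          updateFrame() {
              if (this.phase === "start") this.phase = "turn";   // "Start Phase"
              if (this.phase === "turn") this.updateTurnPhase(); // "Turn Phase"
              // "Action Phase", "Turn End Phase" and "Battle End Phase" would be
              // dispatched here in the same way once their conditions are met
              this.updateTpbInput();                             // "Update TPB Input"
          },
          updateTurnPhase() {
              // In TPBS every battler charges its own gauge a little each frame,
              // instead of the whole troop acting in one discrete turn
              for (const b of this.battlers) {
                  b.tpbCharge = Math.min(1, b.tpbCharge + b.speed);
                  if (b.tpbCharge >= 1) this.phase = "action";
              }
          },
          updateTpbInput() {
              // Open the command window for any battler whose gauge is full (stubbed out here)
          }
      };

      // One simulated frame:
      tpbsFlowSketch.updateFrame();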
  12. This topic aims to share the basic knowledge on what the default RMMZ turn based battle flow implementations do in general, but you're still assumed to have at least:
      1. Little Javascript coding proficiency (barely okay with writing rudimentary Javascript code up to 300 LoC scale)
      2. Basic knowledge on what the default RMMZ turn based battle flow does on the user level (at least you need to know what's going on on the surface when playing it as a player)
      Simplified Flowchart
      Please note that this flowchart only includes the most important elements of the battle flow, to avoid making it too complicated and convoluted for the intended target audience
      Start Battle
      Input Actions
      Process Turns
      Execute Actions
      Summary
      That's all for now. I hope this can help you grasp this basic knowledge. For those thoroughly comprehending the essence of the default RMMZ turn based battle flow implementations, feel free to correct me if there's anything wrong. For those wanting to have a solid understanding of the default RMMZ turn based battle flow implementations, I might open a more advanced topic for that later.
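      For contrast with the frame-driven TPBS sketch in the previous topic, here's an equally simplified sketch of the discrete turn based flow named in the flowchart above. Again, all identifiers are illustrative only and not the actual RMMZ core script ones:

      // Simplified turn based sketch - illustrative names only, not RMMZ core script code
      function runTurnBasedBattleSketch(party, troop) {
          const alive = unit => unit.some(b => b.hp > 0);
          while (alive(party) && alive(troop)) {                   // loop entered by "Start Battle"
              // "Input Actions": every living battler queues exactly one action
              const actions = [...party, ...troop]
                  .filter(b => b.hp > 0)
                  .map(b => ({ subject: b, damage: 10 }));
              // "Process Turns": order the queued actions (here: by agility, descending)
              actions.sort((a, b) => b.subject.agi - a.subject.agi);
              // "Execute Actions": apply them one by one before the next turn starts
              for (const { subject, damage } of actions) {
                  const foes = party.includes(subject) ? troop : party;
                  const target = foes.find(b => b.hp > 0);
                  if (target) target.hp = Math.max(0, target.hp - damage);
              }
          }
      }

      // Usage: runTurnBasedBattleSketch([{ hp: 100, agi: 8 }], [{ hp: 50, agi: 5 }]);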
  13. Outsourcing business functions is nothing new in the business world, and is actually a very common and well-established practice, although whether it's a wise decision depends on the concrete circumstances. On the other hand, outsourcing core business functions is generally quite dangerous, because you can end up falling into a rather disadvantageous scenario, as demonstrated in the following example.
      Situation
      A company has its headquarters in a very well-developed city (so its various expenditures will be much higher), and 3 branches in 3 different, much less developed cities (so their various expenditures will be much lower), all very far away from the headquarters, where each branch runs a core business function outsourced by the headquarters:
      Branch A runs most of the back end for the core business of the whole company
      Branch B runs the 24 hour call center and most of the customer support services for the whole company
      Branch C runs most of the software testing and sales activities for the whole company
      Of course, each branch is also responsible for expanding the markets in their respective cities, and they're allowed to take a large portion of the profits from those markets by running their own business, in order to motivate and reward them for more effective and efficient market expansion (they'll also take a small portion of the profits from the core business of the headquarters so the branch will still have the incentive to keep the core business running). Back to the headquarters: it runs most of the front end for the core business of the whole company (and takes a large portion of the profits from the core business), and is responsible for finding new customers and maintaining existing ones, even though the headquarters will also take a small portion of the profits from the business owned by its branches. This seems to be a decent setup that can significantly lower long-term expenditures and raise overall profits, but actually there's a big problem: The headquarters will likely have less and less control over its branches, which will become more and more powerful as their own businesses grow over time.
      Problem
      So, if you were the head of a branch, and you could frequently pretend to obey the headquarters while actually ignoring its orders, would you focus primarily on the core business of the headquarters, or on the business owned by the branch? Needless to say, you'd choose to work on the latter most of the time, and would only work on the former when it's delayed too much, because you can take a large portion of the profits from the latter but only a small portion of those from the former, and not working on the latter will mainly hurt your branch while not working on the former will mainly hurt the headquarters. As time passes, the branches will become more and more independent from the headquarters, because they'll rely more and more on their own business and less and less on the core business of the headquarters, whereas it will become harder and harder for the headquarters to control the branches, because its situation will become more and more dire while those of the branches become better and better.
      By the time the businesses owned by those branches become mature enough for those branches to totally ignore the core business of the headquarters, that's when the headquarters will be forced to submit to those branches, because the headquarters still needs those branches to keep its core business running, and by then it's already too late to try to take back control from those branches or migrate those outsourced core business functions from those branches to somewhere else (or just take them back and let the headquarters run all those functions itself). Normally, the headquarters should be the one controlling its branches and not the other way around; however, the control can indeed be reversed if the headquarters does outsource its core business functions to its branches, so how can that be prevented from happening in the first place?
      Solution
      The simplest solution is, of course, never outsourcing core business functions to begin with, but sometimes it has to be done to keep the expenditures low enough by utilizing resources in less developed cities, therefore some other ways have to be found to somehow even out the odds. In the short term, when a branch's just established, the headquarters should find the most trustworthy people to run its branches in the first place, and they have to be almost absolutely trustworthy for a long time (whether it's because they have such high integrity or because the headquarters has their key weaknesses in its hands), to ensure that they won't betray the headquarters so easily even when their self-interests will be more and more inclined to do so. In the medium term, when a branch becomes able to take care of itself, a system should be implemented to mitigate the potential conflicts of interest between the headquarters and its branches: for example, when a branch starts to ignore the core business function outsourced from the headquarters, the branch should take a larger and larger portion of the profits from the core business of the headquarters and a smaller and smaller portion of those from the business owned by that branch, and when it's more important to expand the market assigned to that branch, the opposite adjustment should be made accordingly, so the branch will be more rewarded for focusing on what it should focus on at any moment. In the long term, when a branch starts to intend to become independent, the core business function outsourced to it should also be outsourced to a new branch that is far from being able to stand on its own feet, so the headquarters won't have to totally rely on the former branch (albeit the whole mitigation process can take years), probably even at the hefty cost of having to open a new branch, which would also be responsible for opening yet another market. Of course, even these measures won't last forever, because eventually the headquarters can have so many branches (even when some of them will be sub-branches of other branches) that it won't be able to control them anymore, but at least the risk of outsourcing core business functions won't be as unmanageable as before, and nearly no company can last forever anyway.
      Evaluation
      So far it's all about outsourcing core business functions from the headquarters to its branches, but how about outsourcing them to foreign and popular companies (with excellent reputations) specializing in such functions?
      It really depends on the functions and the companies planning to have them outsourced: outsourcing a crucial database to companies running database centers is already quite different from outsourcing a 24 hour call center to the respective companies, because different functions have different risks associated with them, and their respective companies can have different reasons to go against your best interests. For instance, while outsourcing a crucial database to a normally good company is usually wise, that company can also be of interest to powerful and resourceful hackers (due to its high popularity and excellent reputation), so that company can be more prone to being targeted by sophisticated attacks, therefore although that company should also have quite a good defense against such attacks, once an attack succeeds, the database being outsourced can become totally compromised. Outsourcing a core business function to an unknown company in a foreign country, on the other hand, can carry some other risks: say you ask one such company to write the cross-platform front-end of a mobile app for you, and you can end up being effectively blackmailed by that company - perhaps the app will be stable at the beginning of production but have more and more bugs later on (so you'll have to pay more and more money to ask it to fix the bugs), and perhaps you can even end up having to give it a hefty sum so you can take back the codebase of that front-end and fix all those artificially created bugs yourself (of course you'll also have to hire some new employees to do that). On the other hand, just because outsourcing a core business function can be dangerous, it doesn't mean one should never do so, because sometimes the resource and technical requirements for running that function can be much higher than what a company possesses in the foreseeable future, and this restriction alone shouldn't always mean a company shouldn't even have a try at such a function; it's just that outsourcing a core business function, no matter how big and strong a company is, should be a very serious (and perhaps irreversible) decision that can never be taken lightly.
  14. Note
      This plugin's available for commercial use
      Purpose
      Fixes DoubleX RMMV Status Bars compatibility issues
      Games using this plugin
      None so far
      Addressed Plugins
      Prerequisites
      Plugins:
      DoubleX RMMV Status Bars
      Abilities:
      1. Nothing special
      Terms Of Use
      Changelog
      Download Link
      DoubleX RMMV Status Bars Compatibility
  15. Descriptions
      The following image briefly outlines the core structure of this whole idea, which is based on the idea of applying purely server-side rendering to games: https://github.com/Double-X/Image-List/blob/master/Future%20MP%20Games%20Architecture.png
      Note that the client side should have next to no game state or data, nor audio/visual assets, as they're supposed to never leave the server side. The following's the general flow of games using this architecture (all of these happen per frame):
      1. The players start running the game with the client IO
      2. The players set up input configurations (keyboard mapping, mouse sensitivity, mouse acceleration, etc), graphics configurations (resolution, fps, gamma, etc), client configurations (player name, player skin, other preferences not impacting gameplay, etc), and anything that only the players can have information of
      3. The players connect to servers
      4. The players send all those configurations and settings to the servers (those details will be sent again if players change them during the game within the same servers)
      5. The players make raw inputs (like keyboard presses, mouse clicks, etc) as they play the game
      6. The client IO captures those raw player inputs and sends them to the server IO (but there's never any game data/state synchronization between them)
      7. The server IO combines those raw player inputs and the player input configurations for each player to form commands that the game can understand
      8. Those game commands generated by all players in the server will update the current game state set
      9. The game polls the updated current game state set to form the new camera data for each player
      10. The game combines the camera data with the player graphics configurations to generate the rendered graphics markups (with all relevant audio/visual assets used entirely in this step), which are highly compressed and obfuscated and have the least amount of game state information possible
      11. The server IO captures the rendered graphics markups and sends them to the client IO of each player (and nothing else will ever be sent in this direction)
      12. The client IO draws the fully rendered graphics markups (without needing nor knowing any audio/visual asset) on the game screen visible by each player
      The aforementioned flow can also be represented this way: https://github.com/Double-X/Image-List/blob/master/Future%20MP%20Games%20Architecture%20Flow.png
      Differences From Cloud Gaming
      Do note that it's different from cloud gaming in the case of multiplayer (although it's effectively the same in the case of single player), because cloud gaming doesn't demand the games to be specifically designed for that, while this architecture does, and the difference means that:
      1. In cloud gaming, different players rent different remote machines, each hosting the traditional client side of the game, which communicates with the traditional server side of the game in the same real server that's distinct from those middleman devices, meaning that there will be at most 2 round trips per frame (between the client and the remote machine, and between the remote machine and the real server), so if the remote machines aren't physically close to the real server, and the players aren't physically close to the remote machines, the latency can rise to an absurd level
      2. This architecture forces games complying with it to be designed differently from their traditional counterparts right from the start, so it can install the client version (having minimal contents) directly into the device of each player, which directly communicates with the server side of the game in the same server (which has almost everything), thus removing the need for a remote machine per player as the middleman, and hence the problems created by it (latency and the setup/maintenance cost of those remote machines)
      3. The full cycle of the communications in cloud gaming is the following:
      - The player machines send the raw input commands to the remote machines
      - The remote machines convert those commands into new game states of the client side of the game there
      - The client side of the game in those remote machines synchronizes with the server side of the game in the real server
      - The remote machines draw new visuals on their screens and play new audios based on the latest game states on the client side of the game there
      - The remote machines send that audio and visual information to the player machines
      - The player machines redraw those new audios and visuals there
      4. The full cycle of the communications of this architecture is the following:
      - The player machines send the raw input commands directly to the real server
      - The real server converts those commands into the new game states of the server side of the game there
      - The real server sends new audio and visual information to the player machines based on the involved parts of the latest game states on the server side of the game there
      - The player machines draw those new audios and visuals there
      Points 3 and 4 mean the rendering actually happens 2 times in cloud gaming - once in the remote machines and once in the player machines - while the same happens just once in this architecture - just in the player machines directly - and the redundant rendering in cloud gaming can contribute quite a lot to the end latency experienced by players, so this is another advantage of this architecture over cloud gaming. In short, cloud gaming supports games not having cloud gaming in mind (and is thus backward compatible) but can suffer from insane latency and increased business costs (which will be transferred to players), while this architecture only supports games targeting it specifically (and is thus not backward compatible) but removes quite some pains from the remote machine in cloud gaming (this architecture also has some other advantages over cloud gaming, but they'll be covered in the next section). On a side note: If some cloud gaming platforms don't let their players join servers outside of them, while it'd remove the issue of having 3 entities instead of just 2 in the connection, it'd also be more restrictive than this architecture, because the latter only restricts all players to playing the same game using it.
      Advantages
      The advantages of this architecture at least include the following:
      1. The game requirements on the client side can be a lot lower than with the traditional architecture (although cloud gaming also has this advantage), as now all the client side does is send the captured raw player inputs (keyboard presses, mouse clicks, etc) to the server side and draw the received rendered graphics markup (without using any audio/visual assets in this step, and the client side doesn't have any of them anyway) on the game screen visible by each player
      2. Cheating will become next to impossible (cloud gaming may or may not have this advantage), as all cheats are based on game information, and even state of the art machine vision still can't retrieve all the information needed for cheating within a frame (even if it just needs 0.5 seconds to do so, it's already too late in the case of professional FPS E-Sports, not to mention that the rendered graphics markup can change per frame, making it even harder for machine vision to work well there), and it'd be an epoch-making breakthrough in machine vision if the cheats could indeed generate the correct raw player inputs per frame (especially when the rendered graphics markups are highly obfuscated), which is definitely doing way more good than harm to mankind, so games using this architecture can actually help push machine vision research
      3. Game piracy and plagiarism will become a lot more costly and difficult (cloud gaming may or may not have this advantage), as the majority of the game contents and files never leave the servers, meaning that those servers will have to be hacked first before those pirates can crack those games, and hacking a server with the very top-notch security (perhaps monitored by network and server security experts as well) is a very serious business at which not many will even have a chance
      4. Game data and state synchronization should no longer be an issue (while cloud gaming won't have this advantage), because the client side should have nearly no game data and state, meaning that there should be nothing to synchronize, thus this setup not only removes tons of game data/state integrity troubles and network issues, but also deliberate or accidental exploits like lag switching (so servers no longer have to kick players with legitimately high latency, because those players won't have any advantage anymore, due to the fact that such exploits would just cause the users to become inactive for a very short time per lag in the server, thus they'd be the only ones at a disadvantage)
      Disadvantages
      The disadvantages of this architecture at least include the following:
      1. The game requirements and the maintenance cost on the server side will become ridiculous - perhaps a supercomputer, computer cluster, or a computer cloud will be needed for each server, and I just don't know how it'll even be feasible for MMOs to use this architecture in the foreseeable future
      2. The network traffic in this architecture will be absurdly high, because all players are sending raw inputs to the same server, which sends back the rendered graphics markup to each player (even though it's already highly compressed), all happening per frame, meaning that this can lead to serious connection issues with servers having low capacity and/or players with low connection speed/limited network data usage
      3. The rendered graphics markup needs to be totally lossless in terms of visual quality on one hand, otherwise it'd be a bane for games needing state of the art graphics; it also needs to be highly compressed and obfuscated on the other, because the network traffic must be minimized and the markup needs to defend against cheats. These mean it'd be extremely hard to properly implement the rendered graphics markup, let alone without creating new problems
      4. The inherent network latency due to the physical distance between the clients and the servers will be even more severe, because now the client has to communicate with the server per frame, meaning that the servers must be physically located near the players, and thus many servers across many different cities will be needed
      How Disadvantages Diminish Over Time
      Clearly, the advantages of this architecture will be unprecedented if the architecture itself can ever be realized, while its disadvantages are all hardware and technical limitations that will become less and less significant, and will eventually become trivial. So while this architecture won't be the reality in the foreseeable future (at least several years from now), I still believe that it'll be the distant future (probably in terms of decades). For instance, let's say a player joins a server 300km away from his/her device (which is a bit far away already) to play a game with a 1080p@120Hz setup using this architecture; the full latency would have to meet the following requirements in order to have everything done within around 9ms, which is a bit more than the maximum time allowed at 120 FPS:
      The client will take around 1ms to capture and start sending the raw input commands from the player
      The minimum ping, which is limited by the speed of light, will be 2 * 300km / 300,000km per second = around 2ms
      The server will take around 1ms to receive and combine all raw input commands from all players
      The server will take around 1ms to convert the current game state set with those raw input commands to form the new game state set
      The server will take around 1ms to generate all rendered graphics markups (which are lossless, highly compressed and highly obfuscated) from the new camera state of all players
      The server will take around 1ms to start sending those rendered graphics markups to all players
      The client will take around 1ms to receive and decompress the rendered graphics markup of the corresponding player
      The client will take around 1ms to render the decompressed rendered graphics markup as the end result perceived by the player directly
      Do note that hardware limitations, like mouse and keyboard polling rates, as well as monitor response time, are ignored, because they'll always be there regardless of how a multiplayer game is designed and played. Of course, the above numbers are just outright impossible within years, especially when there are dozens of players in the same server, but they should become something very real after a decade or 2, because by then the hardware we have should be much, much more powerful than what's available right now. Similarly, for a 1080p@120Hz setup, if the rendering is lossless but isn't compressed at all, it'd need (1920 * 1080) pixels * 32 bit * 120 FPS + a little bandwidth from raw input commands sent to the server = around 1GB/s per player, which is of course insane to the extreme right now, and the numbers for 4K@240Hz and 8K@480Hz (assuming that it'll ever be or already is a real thing) setups will be around 8GB/s and 64GB/s per player respectively, which are just incredibly ridiculous in the foreseeable future. However, as the rendering markups sent to the client should be highly compressed, the actual numbers shouldn't be this large, and even if the rendering isn't compressed at all, in the distant future, when 6G, or even newer generations, become the new norm, these numbers, while still quite something, should become practical enough for everyday gaming, and not just for enthusiasts.
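      For those who want to double check those uncompressed bandwidth figures, the arithmetic is simple enough to reproduce with a tiny script (the function name and the GB = 10^9 bytes convention are just illustrative choices of mine):

      // Back-of-the-envelope check of the uncompressed bandwidth figures quoted above.
      // Purely illustrative arithmetic - no real game or network API is involved.
      function uncompressedVideoBandwidthGBps(width, height, fps, bitsPerPixel = 32) {
          const bitsPerSecond = width * height * bitsPerPixel * fps;
          return bitsPerSecond / 8 / 1e9; // bits -> bytes -> gigabytes per second
      }

      console.log(uncompressedVideoBandwidthGBps(1920, 1080, 120)); // ~1.0 GB/s (1080p@120Hz)
      console.log(uncompressedVideoBandwidthGBps(3840, 2160, 240)); // ~8.0 GB/s (4K@240Hz)
      console.log(uncompressedVideoBandwidthGBps(7680, 4320, 480)); // ~64 GB/s (8K@480Hz)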
      Nevertheless, there might be an absolute limit on the screen resolution and/or FPS that can be supported by this architecture no matter how powerful the hardware is, so while I think this architecture will be the distant future (like after a decade or 2), it probably won't be the only way multiplayer games are written and played, because the other models will still have their value even by then.
      Future Implications
      If this architecture becomes the practical mainstream, the following will be at least some of the implications:
      1. The direct one time price of the games, and also the indirect one (the need to upgrade the client machine to play those games), will be noticeably lower, as the games are much less demanding on the client side (drawing an already rendered graphics markup, especially without needing any audio nor visual assets, is generally a much, much easier, simpler and smaller task than generating that markup itself, and the client side hosts almost no game data nor state, so the hard disk space and memory required will also be a lot lower)
      2. The periodic subscription fee will exist in more and more games, and those already having such a fee will likely increase it, in order to compensate for the increasing game maintenance cost from upgraded servers (these maintenance cost increments will eventually be cancelled out by hardware improvements causing the same hardware to become cheaper and cheaper)
      3. Companies previously focused on making high end client CPUs, GPUs, RAM, hard disks, motherboards, etc will gradually shift their business into making the server counterparts, because the demand for high end hardware will be relatively smaller and smaller on the client side, but relatively larger and larger on the server side
      4. The demand for high end servers will be higher and higher, not just from game companies, but also from some players investing a lot into those games, because they'd have the incentive to build some such servers themselves, then either use them to host some games, or rent those servers to others who do
      Anti-Cheating
      In the case of highly competitive E-Sports, the server can even implement some kind of fuzzy logic, fine-tuned with a deep learning AI, to help report suspicious raw player input sets (consisting of keyboard presses, mouse clicks, etc) with a rating on how suspicious each is, which can be further broken down into more detailed components on why they're that suspicious. This can only be done effectively and efficiently if the server has direct access to the raw player input set, which is one of the cornerstones of this very architecture. Combining this with traditional anti cheat measures, like having a server with the highest security level, an in-game admin with server level access to monitor all players in the server (now with the aid of the AI reporting suspicious raw player input sets for each player), another admin for each team/side to monitor player activities, a camera for each player, and thoroughly inspected player hardware, it'll not only make cheating next to impossible in major LAN events (which are also cut off from external connections), but also make it so obviously infeasible and unrealistic that almost everyone will agree that cheating is indeed nearly impossible there, thus drastically increasing their confidence in the match fairness.
      Hybrid Models
      Of course, games can also use a hybrid model, and this especially applies to multiplayer games also having single player modes.
      If the games support single player, of course the client side needs to have everything (and the piracy/plagiarism issues will be back); it's just that most of those contents won't be used in multiplayer if this architecture's used. If the games run in multiplayer, the hosting server can choose (before hosting the game) whether this architecture's used (of course, only players with the full client side package can join servers using the traditional counterpart, and only players with the server side subscription can join servers using this architecture). Alternatively, players can choose to play single player modes with a server for each player, and those servers are provided by the game company, letting players play otherwise extremely demanding games on a low-end machine (of course the players will need to apply for the periodic subscriptions to have access to this kind of single player mode). On the business side, it means such games will have a client side package, with a one time price for everything on the client side, and a server side package, with a periodic subscription for being able to play multiplayer, and single player with a dedicated server provided; the players can then buy either one, or both, depending on their needs and wants. This hybrid model, if both technically and economically feasible, is perhaps the best model I can think of.
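      To make the per-frame division of labour a bit more concrete, the thin client loop could look roughly like the sketch below - the WebSocket transport, message shapes and helper names here are purely my own assumptions for illustration, not any existing engine's API:

      // Minimal sketch of the thin client loop described above - assumptions only.
      // The client only ships raw inputs out (steps 5-6) and draws markups in (step 12).
      const socket = new WebSocket("wss://game-server.example/session"); // assumed endpoint

      const pendingInputs = [];
      document.addEventListener("keydown", e => pendingInputs.push({ type: "key", code: e.code }));
      document.addEventListener("mousedown", e => pendingInputs.push({ type: "click", x: e.clientX, y: e.clientY }));

      socket.onmessage = event => {
          // Step 12: draw the fully rendered graphics markup; no assets or game state needed
          drawMarkupOnCanvas(JSON.parse(event.data));
      };

      function clientFrame() {
          // Steps 5-6: send this frame's raw inputs to the server IO, and nothing else
          if (socket.readyState === WebSocket.OPEN) {
              socket.send(JSON.stringify({ frameInputs: pendingInputs.splice(0) }));
          }
          requestAnimationFrame(clientFrame);
      }
      requestAnimationFrame(clientFrame);

      function drawMarkupOnCanvas(markup) {
          // Placeholder: decode the (compressed, obfuscated) markup and blit it to a canvas
      }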
  16. Note
      This plugin's available for commercial use
      Purpose
      Fixes DoubleX RMMV Popularized ATB compatibility issues
      Games using this plugin
      None so far
      Action Sequences
      Addressed Plugins
      Video
      https://www.youtube.com/watch?v=aoBI3DaE3g8
      Prerequisites
      Plugins:
      1. DoubleX RMMV Popularized ATB Core
      Abilities:
      1. Nothing special
      Instructions
      Place this plugin below all DoubleX RMMV Popularized ATB addons
      Terms Of Use
      You shall keep this plugin's Plugin Info part's contents intact
      You shall not claim that this plugin's written by anyone other than DoubleX or his aliases
      None of the above applies to DoubleX or his/her aliases
      Changelog
      Download Link
      DoubleX RMMV Popularized ATB Compatibility
      DoubleX RMMV Popularized ATB Compatibility v104a.js
  17. Updates
      v1.04a(GMT 0500 1-1-2022):
      1. Compatible With Yanfly Engine Plugins - Battle Engine Extension - Animated Sideview Enemies
      Please note that using this with Yanfly's animated sideview enemies might cause minor performance issues on low-end mobiles
  18. Just bought myself a Galax RTX 2060 12GB, and soon I can feel its power :)

  19. Unfortunately, I failed to reproduce the issue, so would you mind sending me your project via pm?
  20. Patches
      (DoubleX)ECATB Base Formula (possibly hurts code performance but frees you from copying the same <ecatb rate: RX> to every battler)
      Note
      Introduction
      Purpose
      Be an enhanced version of YSA-CATB with bug fixes and addons integrated
      Features
      Possibly upcoming features
      Games using this script
      None so far
      Compatibility Fix
      DoubleX RMVXA Enhanced YSA Battle System: Classical ATB Compatibility
      Screenshots
      Video
      https://www.youtube.com/watch?v=E692R6s8F0I
      https://www.youtube.com/watch?v=6E0-X0wbLAM
      Demo
      Coming Soon
      Prerequisites
      Terms Of Use
      Instructions
      Author's Notes
      FAQ
      Authors
      Changelog
      Download Link
      DoubleX RMVXA Enhanced YSA Battle System: Classical ATB
  21. Updates
      v0.05c(GMT 0600 28-11-2021):
      1. Fixed wrong eval of ecatb_battler_scale and uninitialized battler turn bugs
  22. It seems to me that that Yanfly plugin will just mirror everything that's attached to that sprite, as long as that sprite itself's to be mirrored, meaning that the same problem would occur if another plugin attaches an HP/MP/TP bar to it. Regardless, I'll still look into this issue and find a way to fix it
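      A tiny sketch of the general mechanism (not the actual Yanfly or DoubleX plugin code), assuming the usual RMMV setup where sprites are PIXI display objects and a battler sprite is mirrored by giving it a negative x scale:

      // General illustration of the mirroring behavior described above
      const enemySprite = new Sprite();  // stands in for the mirrored battler sprite
      const barSprite = new Sprite();    // stands in for an attached HP/MP/TP bar
      enemySprite.addChild(barSprite);

      enemySprite.scale.x = -1;          // mirroring the parent also mirrors every child, bar included

      // One common workaround: counter-flip the child so it keeps its original orientation
      barSprite.scale.x = 1 / enemySprite.scale.x;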
  23. Updates
      v1.03f(GMT 0700 23-6-2021):
      1. Fixed the bug where the visuals of the action sequences of actor sprites are reset when other actors are inputable
  24. Let's imagine that the job of a harvester is to use an axe to harvest trees, and the axe will deteriorate over time. Assuming that the following's the expected performance of the axe:
      Fully sharp axe (extremely excellent effectiveness and efficiency; ideal defect rates) -
      1 tree cut / hour
      1 / 20 chance for the tree being cut to be defective (with 0 extra decent trees to be cut for compensation as compensating trees, due to negligible damages caused by defects)
      Expected number of normal trees / tree cut = (20 - 1 = 19) / 20
      Becomes a somewhat sharp axe after 20 trees cut (a fully sharp axe will become a somewhat sharp axe rather quickly)
      Somewhat sharp axe (reasonably high effectiveness and efficiency; acceptable defect rates) -
      1 tree cut / 2 hours
      1 / 15 chance for the tree being cut to be defective (with 1 extra decent tree to be cut for compensation as compensating trees, due to nontrivial but small damages caused by defects)
      Expected number of normal trees / tree cut = (15 - 1 - 1 = 13) / 15
      Becomes a somewhat dull axe after 80 trees cut (a somewhat sharp axe will usually be much more resistant to having its sharpness reduced per tree cut than a fully sharp axe)
      Needs 36 hours of sharpening to become a fully sharp axe (no trees cut during the atomic process)
      Somewhat dull axe (barely tolerable effectiveness and efficiency; alarming defect rates) -
      1 tree cut / 4 hours
      1 / 10 chance for the tree being cut to be defective (with 2 extra decent trees to be cut for compensation as compensating trees, due to moderate but manageable damages caused by defects)
      Expected number of normal trees / tree cut = (10 - 1 - 2 = 7) / 10
      Becomes a fully dull axe after 40 trees cut (a somewhat dull axe is just ineffective and inefficient, but a fully dull axe is significantly dangerous to use when cutting trees)
      Needs 12 hours of sharpening to become a somewhat sharp axe (no trees cut during the atomic process)
      Fully dull axe (ridiculously poor effectiveness and efficiency; obscene defect rates) -
      1 tree cut / 8 hours
      1 / 5 chance for the tree being cut to be defective (with 3 extra decent trees to be cut for compensation as compensating trees, due to severe but partially recoverable damages caused by defects)
      Expected number of normal trees / tree cut = (5 - 1 - 3 = 1) / 5
      Becomes an irreversibly broken axe (way beyond repair) after 160 trees cut
      The harvester will resign if the axe keeps being fully dull for 320 hours (no one will be willing to work that dangerously forever)
      Needs 24 hours of sharpening to become a somewhat dull axe (no trees cut during the atomic process)
      Now, let's try to come up with some possible work schedules:
      Sharpens the axe to be fully sharp as soon as it becomes somewhat sharp -
      Expected to have 19 normal trees and 1 defective tree cut after 1 * (19 + 1) = 20 hours (simplifying "1 / 20 chance for the tree being cut to be defective" to be "1 defective tree / 20 trees cut")
      Expected the axe to become somewhat sharp now, and become fully sharp again after 36 hours
      Expected long term throughput to be 19 normal trees / (20 + 36 = 56) hours (around 33.9%)
      Sharpens the axe to be somewhat sharp as soon as it becomes somewhat dull -
      The initial phase of having the axe being fully sharp's skipped as it won't be repeated
      Expected to have 68 normal trees, 6 defective trees, and 6 compensating trees cut after 2 * (68 + 6 + 6) = 160 hours (simplifying "1 / 15 chance for the tree being cut to be defective" to be "1 defective tree / 15 trees cut" and using the worst case)
      Expected the axe to become somewhat dull now, and become somewhat sharp again after 12 hours
      Expected long term throughput to be 68 normal trees / (160 + 12 = 172) hours (around 39.5%)
      Sharpens the axe to be somewhat dull as soon as it becomes fully dull -
      The initial phase of having the axe being fully or somewhat sharp's skipped as it won't be repeated
      Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + 8) = 160 hours (simplifying "1 / 10 chance for the tree being cut to be defective" to be "1 defective tree / 10 trees cut")
      Expected the axe to become fully dull now, and become somewhat dull again after 24 hours
      Expected long term throughput to be 28 normal trees / (160 + 24 = 184) hours (around 15.2%)
      Sharpens the axe to be somewhat dull right before the harvester will resign -
      The initial phase of having the axe being fully or somewhat sharp's skipped as it won't be repeated
      Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + 8) = 160 hours (simplifying "1 / 10 chance for the tree being cut to be defective" to be "1 defective tree / 10 trees cut") when the axe's somewhat dull
      Expected the axe to become fully dull now, and expected to have 4 normal trees, 8 defective trees, and 24 compensating trees cut after 8 * (4 + 8 + 24) = 288 hours (simplifying "1 / 5 chance for the tree being cut to be defective" to be "1 defective tree / 5 trees cut" and using the worst case) when the axe's fully dull
      Expected total number of normal trees to be 28 + 4 = 32
      Expected the axe to become somewhat dull again after 24 hours (so the axe remained fully dull for 288 + 24 = 312 hours, which is the maximum before the harvester will resign)
      Expected long term throughput to be 32 normal trees / (160 + 312 = 472) hours (around 6.7%)
      Sharpens the axe to be fully sharp as soon as it becomes somewhat dull -
      Expected total number of normal trees to be 19 + 68 = 87
      Expected total number of hours to be 56 + 172 = 228 hours
      Expected long term throughput to be 87 normal trees / 228 hours (around 38.2%)
      Sharpens the axe to be fully sharp as soon as it becomes fully dull -
      Expected total number of normal trees to be 19 + 68 + 28 = 115
      Expected total number of hours to be 56 + 172 + 184 = 412 hours
      Expected long term throughput to be 115 normal trees / 412 hours (around 27.9%)
      Sharpens the axe to be fully sharp right before the harvester will resign -
      Expected total number of normal trees to be 19 + 68 + 32 = 119
      Expected total number of hours to be 56 + 172 + 472 = 700 hours
      Expected long term throughput to be 119 normal trees / 700 hours (17%)
      Sharpens the axe to be somewhat sharp as soon as it becomes fully dull -
      Expected total number of normal trees to be 68 + 28 = 96
      Expected total number of hours to be 172 + 184 = 356 hours
      Expected long term throughput to be 96 normal trees / 356 hours (around 26.9%)
      Sharpens the axe to be somewhat sharp right before the harvester will resign -
      Expected total number of normal trees to be 68 + 32 = 100
      Expected total number of hours to be 172 + 472 = 644 hours
      Expected long term throughput to be 100 normal trees / 644 hours (around 15.5%)
      So, while these work schedules clearly show that sharpening the axe's important to maintain effective and efficient long term throughput, trying to keep it always fully sharp is certainly going overboard (33.9% throughput), when being somewhat sharp is already enough (39.5% throughput). Then why do some bosses not let the harvester sharpen the axe even when it's somewhat or even fully dull?
Then why do some bosses not let the harvester sharpen the axe even when it's somehow or even fully dull? Because sometimes, a certain amount of normal trees have to be acquired within a set amount of time. Let's say that the axe has gone from fully sharp to just somehow dull, so there should be 87 normal trees cut after 180 hours, netting a short term throughput of 48.3%. But then some emergency comes, and 3 extra normal trees need to be delivered within 16 hours no matter what, whereas compensating trees can be delivered later in the case of having defective trees. In this case, there won't be enough time to sharpen the axe to be even just somehow sharp, because even in the best case, it'd cost 12 + 2 * 3 = 18 hours. On the other hand, even if there's 1 defective tree from using the somehow dull axe within those 16 hours, the harvester will still barely make it on time, because the chance of having 2 defective trees is (1 / 10) ^ 2 = 1 / 100, which is low enough to be neglected for now, and as compensating trees can be delivered later even if there's 1 defective tree, the harvester will be able to deliver 3 normal trees.
In reality, crunch modes like this will happen occasionally, and most harvesters will likely understand that they're probably inevitable eventually, so as long as these crunch modes won't last for too long, it's still practical to work under such circumstances once in a while, because it's just being reasonably pragmatic. However, in supposedly exceptional cases, the situation's so extreme that, whenever the harvester's about to sharpen the axe, the boss requests that another tree must be acquired as soon as possible, causing the harvester to never have time to sharpen the axe for a long time, thus having to work more and more ineffectively and inefficiently in the long term.
In the case of a somehow dull axe, 12 hours are needed to sharpen it to be somehow sharp, whereas another tree's expected to be acquired within 4 hours, because the chance of having a defective tree cut is 1 / 10, which can be considered small enough to take the risk, and the expected number of normal trees over all trees being cut is 7 out of 10 for a somehow dull axe, whereas 12 hours is enough to cut 3 trees by using such an axe, so at least 2 normal trees can be expected within this period. If this continues, eventually the axe will become fully dull, and 24 hours will be needed to sharpen it to be somehow dull, whereas another tree's expected to be acquired within 8 hours, because the chance of having a defective tree is 1 / 5, which can still be considered controllable enough to take the risk, especially with an optimistic estimation. While the expected number of normal trees over all trees being cut is 1 out of 5 for a fully dull axe, and 24 hours is just enough to cut 3 trees by using such an axe, meaning that the harvester's not expected to make it normally, in practice, the boss will usually unknowingly apply optimism bias(at least until it no longer works) by thinking that there will be no defective trees when just another tree's to be cut, so the harvester will still be forced to continue cutting trees, despite the fact that the axe should be sharpened as soon as possible even when just considering the short term.
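The emergency arithmetic above can be sketched the same way; the numbers below are my own restatement of the scenario, assuming at most 1 defective tree within the deadline, since the 1 / 100 chance of 2 defective trees is being neglected:
// Worst tolerated case: 1 extra tree has to be cut to replace the possible defective one
function canDeliverInTime(normalTreesNeeded, deadlineHours, hoursPerTree, sharpenHoursFirst) {
    var treesToCut = normalTreesNeeded + 1;
    return sharpenHoursFirst + treesToCut * hoursPerTree <= deadlineHours;
}

canDeliverInTime(3, 16, 4, 0);  // true  - keeps the somehow dull axe: 4 * 4 = 16 hours
canDeliverInTime(3, 16, 2, 12); // false - sharpens first: 12 + 4 * 2 = 20 hours(even the
                                // best case of 12 + 3 * 2 = 18 hours already misses it)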
Also, if the boss can readily replace the current harvester with a new one immediately, the boss would rather let the current harvester resign than let that harvester sharpen the axe to be at least somehow dull, because to the boss, it's always emergencies after emergencies, meaning that the short term's constantly so dire that there's just no room to even consider the long term at all.
But why would such an undesirable situation be reached? Other than extreme and rare misfortunes, it's usually due to overly optimistic work schedules not seriously taking into account the existence of defective and compensating trees, the importance of the sharpness of the axe, and the need to sharpen the axe, meaning that such unrealistic work schedules are essentially linear(e.g.: if one can cut 10 trees on day one, then he/she can cut 1000 trees by day 100), which is obviously simplistic to the extreme. Occasionally, it can also be because of the inherent risks of sharpening the axe - Sometimes the axe won't actually be sharpened after spending 12, 24 or 36 hours, and, while it's rare, the axe might actually be even more dull than before. Most importantly, the boss usually can't directly judge the sharpness of the axe, meaning that it's generally hard for that boss to judge the ROI of sharpening an axe of a given sharpness before the sharpening's done, and it's only normal for the boss to distrust what can't be measured objectively by him/herself(on the other hand, normal, defective and compensating trees are objectively measurable, so the boss will of course emphasize these KPIs), especially for those who've been opting for linear thinking.
Of course, the whole axe cutting tree model is highly simplified, at least because:
- The axe sharpness deterioration isn't a step-wise function(an axe going from one discrete level of sharpness to another after cutting a set number of trees), but rather a continuous one(gradual degrading over time) with some variations on the number of trees cut, meaning that when to sharpen the axe in the real world isn't as clear cut as in the aforementioned model(usually it's when the harvester starts feeling the pain, ineffectiveness and inefficiency of using the axe due to unsatisfactory sharpness, and these feelings have lasted for a while)
- Not all normal trees are equal, not all defective trees are equal, and not all compensating trees are equal(these complications are intentionally simplified in this model because these complexities are hardly measurable)
- The whole model doesn't take the morale of the harvester into account, except the obvious point that that harvester will resign after using a fully dull axe for too long(but the importance of sharpening the axe will only increase if morale has to be considered as well)
- In some cases, even when the axe's not fully dull, it's already impossible to sharpen it to be fully or even just somehow sharp(and in really extreme cases, the whole axe can just suddenly break altogether for no apparent reason)
Nevertheless, this model should still serve its purpose of getting this point across - There isn't always a universal answer to when to sharpen the axe and to which level of sharpness, because these questions involve calculations of concrete details(including those critical parts that can't be quantified) on a case-by-case basis, but the point remains that the importance of sharpening the axe should never be underestimated.
When it comes to professional software engineering:
- The normal trees are like needed features that work well enough
- The defective trees are like nontrivial bugs that must be fixed as soon as possible(in general, the worse the code quality of the codebase is, the higher the chance of producing more bugs and more severe bugs, and the more time's needed to fix each bug of the same severity - more severe bugs generally cost more effort to fix in the same codebase)
- The compensating trees are like the extra outputs needed for fixing those bugs and repairing the damages caused by them
- The axe is like the codebase that's supposed to deliver the needed features(actually, the axe can also be like the software engineers themselves, when the topic involved is software engineering team management rather than just refactoring)
- Sharpening the axe is like refactoring(or, in the case of the axe referring to software engineers, sharpening the axe can be like letting them have some vacations to recover from burnout)
A fully sharp axe is like a codebase suffering from the gold plating anti pattern on the code quality aspect(diminishing returns apply to code quality as well), as if those professional software engineers couldn't even withstand a tiny amount of technical debt. On the good side, such an ideal codebase is the most unlikely to produce nontrivial bugs, and even when it does, they're most likely fixed with almost no extra effort needed, because they're usually found way before going into production, and the test suite will point straight to their root causes.
A somehow sharp axe is like a codebase with more than satisfactory code quality, but not to the point of investing too much in this regard(and the technical debt is still doing more good than harm due to its amount being under moderation). Such a practically good codebase is still a bit unlikely to produce nontrivial bugs regularly, but it does have a small chance of letting some of them leak into production, causing a mild amount of extra effort to be needed to fix the bugs and repair the damages caused by them.
A somehow dull axe is like a codebase with undesirable code quality, but it's still an indeed workable codebase(although it's still quite painful to work with) with a worrying yet payable amount of technical debt. Undesirable yet working codebases like this probably have a significant chance of producing nontrivial bugs frequently, and a significant chance for quite some of them to leak into production, causing a rather significant amount of extra effort to be needed to fix the bugs and repair the damages caused by them.
A fully dull axe is like an unworkable codebase that must be refactored as soon as possible, because even senior professional software engineers can easily create more severe bugs than needed features with such a codebase(actually they'll be more and more inclined to rewrite the codebase the longer it's not refactored), causing their productivity to even be negative in the worst cases. An effectively broken codebase like this is guaranteed to have a huge chance of producing nontrivial bugs all the time, and nearly all of them will leak into production, causing an insane amount of extra effort to be needed to fix the bugs and repair the damages caused by them(so the professionals will always be fixing bugs instead of delivering features), provided that these recovery moves can be successful at all.
A broken axe is like a codebase that's totally technically bankrupt, where the only way out is to completely rewrite the whole thing from scratch, because no one can fathom a thing in that codebase at that point, and sticking to such a codebase is undoubtedly a sunk cost fallacy.
While a codebase with overly ideal code quality can deliver the needed features in the most effective and efficient ways possible as long as the codebase remains in this state, in practice the codebase will quickly degrade from such an ideal state to a more practical state where the code quality is still high(on the other hand, going back to the ideal state is very costly in general no matter how effective and efficient the refactoring is), because this state is essentially mysophobia in terms of code quality. On the other hand, a codebase with reasonably high code quality can be rather resistant to code quality deterioration(but far from 100% resistant of course), especially when the professional software engineers are disciplined, experienced and qualified, because degrading code quality in such codebases is normally due to quick but dirty hacks, which shouldn't be frequently needed by senior professional software engineers.
To summarize, a senior professional software engineer should strive to keep the codebase at a reasonably high code quality, but not to the point of not even having good technical debt, and when the codebase has eventually degraded to a just barely tolerable code quality, it's time to refactor it back to a very satisfactory, but not overly ideal, code quality, except in the case of occasional crunch modes, where even a disciplined, experienced and qualified expert will have to get their hands dirty once in a while on a still workable codebase with temporarily unacceptable code quality - just that such crunch modes should be ended as soon as possible, which should be feasible with a well-established work schedule.
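If this takeaway had to be written down as something executable, it might loosely look like the sketch below; the state names and the suggested actions are only my illustrative assumptions mirroring the axe states above, not rules coming from any particular methodology:
// A loose decision rule mirroring the axe states; purely illustrative
function nextMove(codeQuality, isCrunchMode) {
    if (codeQuality === "overly ideal") return "Stop gold plating and just deliver features";
    if (codeQuality === "reasonably high") return "Keep delivering features";
    if (codeQuality === "barely tolerable") {
        if (isCrunchMode) return "Deliver the urgent features, then refactor right after the crunch";
        return "Refactor back to a reasonably high, but not overly ideal, code quality";
    }
    if (codeQuality === "unworkable") return "Refactor as soon as possible, even during a crunch";
    return "Consider a rewrite - the codebase is effectively technically bankrupt";
}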
  25. Abbreviations HID - High Information Density LID - Low Information Density HIV - High Information Volume LIV - Low Information Volume HID/HIV - Those who can handle both HID and HIV well HID/LIV - Those who can handle HID well but can only handle LIV well LID/HIV - Those who can only handle LID well but can handle HIV well LID/LIV - Those who can only handle LID and LIV well TL;DR(The Whole Article Takes About 30 Minutes To Read In Full Depth) Information Density A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. Unfortunately, different programmers have different capacities on facing information density. In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes are just wasting their time in their perspectives; Those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while writing and reading terse codes are just too complicated and convoluted in their perspectives. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies, so all we can do is to try hard. Information Volume A code chunk having a large piece of information content that aren't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that aren't abstracted away from that code chunk has LIV. Unfortunately, different software engineers have different capacities on facing information volume, so it seems that the best way's to find a happy medium that can break a very long function into fathomable chunks on one hand, while still keeping the function call stack manageable on the other. In general, those who can handle very HIV well will prefer very long functions, as it'll be more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way for such software engineers, while writing and reading very short functions are just going the opposite directions in their perspectives; Those who can only handle very LIV well will prefer very short functions, as it'll be easier and simpler to reason about well-defined abstractions(as long as they don't leak in nontrivial ways) that way for such software engineers, while writing and reading long functions are just going the opposite directions in their perspectives. Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with codes having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies(especially when those small function abstractions do leak in nontrivial ways), so all we can do is to try hard. 
Combining Information Density With Information Volume
While information density and volume are closely related, there are no strict implications from one to the other, meaning that there are different combinations of these 2 factors and the resultant styles can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions:
Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy
Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First
Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate
Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep
Teams With Programmers Having Different Styles
It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are becoming more and more extreme. The combinations of these conflicts may be:
Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes
Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things
Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose
Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient
Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles
Conclusions
Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there are quite some middle grounds to play with. In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle grounds well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some software engineers falling into at least 1 extreme, so we should still know how to work well with them. After all, nowadays most real life business codebases are about teamwork, not lone wolves. By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think about some aspects behind codebase readability and the nature of conflicts about it, and that we can become more able to deal with more different kinds of codebases and software engineers. I think that it's more feasible for us to be able to read codebases with different information density and volume than to ask others and the codebase to accommodate our information density/volume limitations. Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large(e.g.: the existence of consistent formatting and meaningful naming) and partially subjective at large(e.g.: the ability to handle different kinds of information density and volume varying among software engineers). Maybe many avoidable conflicts involving readability stem from the tendency of many software engineers to treat readability as an easy, simple and small concept that's entirely objective.
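Before going into the full details below, here's a toy illustration of the 4 combinations listed above, using a made up requirement(the total price of an order); at this tiny scale the volume axis can only be hinted at, so picture each style being scaled up to a real codebase:
// Very HID + HIV: everything inlined in one terse chunk
function totalA(items, taxRate) { return items.reduce(function(sum, i) { return sum + i.price * i.quantity; }, 0) * (1 + taxRate); }

// Very HID + LIV: still terse, but split into small named steps
var subtotalB = function(items) { return items.reduce(function(sum, i) { return sum + i.price * i.quantity; }, 0); };
var totalB = function(items, taxRate) { return subtotalB(items) * (1 + taxRate); };

// Very LID + HIV: everything spelled out in one long verbose chunk
function totalC(items, taxRate) {
    var subtotal = 0;
    for (var index = 0; index < items.length; index = index + 1) {
        var item = items[index];
        var itemTotal = item.price * item.quantity;
        subtotal = subtotal + itemTotal;
    }
    var taxMultiplier = 1 + taxRate;
    var total = subtotal * taxMultiplier;
    return total;
}

// Very LID + LIV: verbose and split into many tiny steps(deepening the call stack)
function itemTotalD(item) {
    var itemTotal = item.price * item.quantity;
    return itemTotal;
}
function subtotalD(items) {
    var subtotal = 0;
    for (var index = 0; index < items.length; index = index + 1) {
        subtotal = subtotal + itemTotalD(items[index]);
    }
    return subtotal;
}
function totalD(items, taxRate) {
    var taxMultiplier = 1 + taxRate;
    return subtotalD(items) * taxMultiplier;
}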
Information Density
A Math Analogy
Consider the following math formula that's likely learnt in high school(Euler's Formula): https://github.com/Double-X/Image-List/blob/master/1590658698206.png
Most of those who've studied high school math well should immediately fathom this, but for those who don't, you may want to try to fathom this text equivalent, which is more verbose: The Euler number to the power of (the imaginary unit multiplied by theta in radians) equals cosine theta in radians plus the imaginary unit multiplied by sine theta in radians. I hope that those who can't fathom the above formula can at least fathom the above text.
This brings up the importance of information density: A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. For instance, the above formula has HID whereas the above text has LID. In this example, those who're good at math in general and high school math in particular will likely prefer the formula over the text equivalent, as they can probably fathom the former instantly while feeling that the latter's just wasting their time; Those who're bad at math in general and high school math in particular will likely prefer the text equivalent over the formula, as they might not even know the fact that cisx is the short form of cosx + isinx.
For those who can handle HID well, even if they don't know what the Euler number is at all, they should still be able to deduce these corollaries within minutes if they know what cisx is: https://github.com/Double-X/Image-List/blob/master/1590660502890.png
But for those who can only handle LID well, they'll unlikely be able to know what's going on at all, even if they know how to use the binomial theorem and the truncation operator.
Now let's try to fathom this math formula that can be fathomed using just high school math: https://github.com/Double-X/Image-List/blob/master/1590661116897.png
While it doesn't involve as much math knowledge nor as many concepts as the Euler's Formula, I'd guess that only those who're really, really exceptional in high school math and math in general can fathom this within seconds, let alone instantly, all because of this formula having such a ridiculously high information density. If you can really fathom this instantly, then I'd think that you can really handle very HID very well, especially when it comes to math.
So what if we try to explain this by text? I'd come up with the following try: (The summation of m variables from x1 to xm) to the power of n equals the summation of (n elements, each being the combination of selecting r elements from n - 1 elements, where r is the outermost summation counter from 0 to n - 1, multiplied by the summation of (m elements, each being xi to the power of n - r, where i is the middle summation counter from 1 to m, multiplied by (the summation of m variables from x1 to xm except xi) to the power of r))
Maybe you can finally fathom what this formula is, but still probably not what it really means nor how to use it meaningfully, let alone deducing any useful corollary. However, with the text version, at least we can clearly see just how high the information density is in that formula, as even the information density of the text version isn't actually anything low. These 2 math examples aim to show that HID, as long as it's kept in moderation, is generally preferred over its LID counterparts.
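For those who can't load the linked images, the 2 formulae being discussed are most likely the following(reconstructed from the text descriptions above, so treat them as my best guesses of what the images show rather than exact copies):
e^{i\theta} = \cos\theta + i\sin\theta

\left(\sum_{i = 1}^{m} x_i\right)^n = \sum_{r = 0}^{n - 1} \binom{n - 1}{r} \sum_{i = 1}^{m} x_i^{n - r} \left(\left(\sum_{j = 1}^{m} x_j\right) - x_i\right)^r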
But once the information density becomes unnecessarily and unreasonably high, the much more verbose versions, which might otherwise seem too verbose, are actually preferred in general, especially when their information density isn't low.
Some Examples Showing HID vs LID
There are programming parallels to the above math analogy: terse and verbose codes. Unfortunately, different programmers have different capacities for facing information density, just like different people have different capacities for fathoming math. For instance, the ternary operator is a very obvious terse example of this(Javascript ES5):
var x = condition1 ? value1 : condition2 ? value2 : value3;
Whereas a verbose if/else if/else equivalent can be something like this:
var x;
if (condition1 === true) {
    x = value1;
} else if (condition2 === true) {
    x = value2;
} else {
    x = value3;
}
Those who're used to reading and writing terse codes will likely like the ternary operator version, as the if/else if/else version will likely be just too verbose for them; Those who're used to reading and writing verbose codes will likely like the if/else if/else version, as the ternary operator version will likely be just too terse for them(I've seen production codes with if (variable === true), so don't think that the if/else if/else version can only be a totally made up example). In this case, I've worked with both styles, and I guess that most programmers can handle both.
Similarly, Javascript and some other languages support short circuit evaluation, which is also a terse style. For instance, the || and && operators can be short circuited this way:
return isValid && (array || []).concat(object || canUseDefault && default);
Where a verbose equivalent can be something like this(it's probably too verbose anyway):
var returnedValue;
if (isValid === true) {
    var returnedArray;
    var isValidArray = (array !== null) && (array !== undefined);
    if (isValidArray === true) {
        returnedArray = array;
    } else {
        returnedArray = [];
    }
    var pushedObject;
    var isValidObject = (object !== null) && (object !== undefined);
    if (isValidObject === true) {
        pushedObject = object;
    } else if (canUseDefault === true) {
        pushedObject = default;
    } else {
        pushedObject = canUseDefault;
    }
    if (Array.isArray(pushedObject) === true) {
        returnedArray = returnedArray.concat(pushedObject);
    } else {
        returnedArray = returnedArray.concat([pushedObject]);
    }
    returnedValue = returnedArray;
} else {
    returnedValue = isValid;
}
return returnedValue;
Clearly the terse version has very HID while the verbose version has very LID. Those who can handle HID well will likely fathom the terse version instantly, while needing minutes just to fathom what the verbose version's really trying to achieve and why it's not written in the terse version to avoid wasting time reading so much code doing so few meaningful things; Those who can only handle LID well will likely fathom the verbose version within minutes, while probably giving up after trying to fathom the terse version for seconds and wondering what's the point of being so concise when it's doing just so many things in just 1 line. In this case, I seriously suspect whether anyone fluent in Javascript will ever write the verbose version at all, when the terse version is actually one of the popular idiomatic styles.
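For what it's worth, one doesn't have to pick between these 2 extremes either; a possible middle ground can be something like the below(with default renamed to defaultObject, as default is actually a reserved word in Javascript, so the rename is needed to keep the snippet runnable):
var pushedItems = array || [];
var fallbackObject = canUseDefault && defaultObject;
return isValid && pushedItems.concat(object || fallbackObject);
It keeps most of the density of the terse version while still giving the intermediate values names, which might already be enough for quite some of those who can only handle LID well.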
Now let's try to fathom this really, really terse code(I hope you won't face this in real life):
for (var texts = [], num = min; num <= max; num += increment) {
    var primeMods = primes.map(function(prime) { return num % prime; });
    texts.push(primeMods.reduce(function(text, mod, i) {
        return (text + (mod || words[i])).replace(mod, "");
    }, "") || num);
}
return texts.join(textSeparator);
If you can fathom this within seconds or even instantly, then I'd admit that you can really handle ridiculously HID exceptionally well. However, adding these lines will make it clear:
var min = 1, max = 100, increment = 1;
var primes = [3, 5], words = ["Fizz", "Buzz"], textSeparator = "\n";
So all it's trying to do is the very, very popular Fizz Buzz programming test in a ridiculously terse way. So let's try this much more verbose version of this Fizz Buzz programming test:
var texts = [];
for (var num = min; num <= max; num = num + increment) {
    var text = "";
    var primeCount = primes.length;
    for (var i = 0; i < primeCount; i = i + 1) {
        var prime = primes[i];
        var mod = num % prime;
        if (mod === 0) {
            var word = words[i];
            text = text + word;
        }
    }
    if (text === "") {
        texts.push(num);
    } else {
        texts.push(text);
    }
}
return texts.join(textSeparator);
Even those who can handle very HID well should still be able to fathom this verbose version within seconds, as should those who can only handle very LID well. Also, considering the inherent complexity of this generalized Fizz Buzz, the verbose version doesn't have much boilerplate, even when compared to the terse version, so I don't think those who can handle very HID well will complain about the verbose version much. On the other hand, I doubt whether those who can only handle very LID well could even fathom the terse version, let alone in a reasonable amount of time(like minutes), if I hadn't told them that it's just Fizz Buzz. In this case, I really doubt there's any point in writing the terse version when I don't see any nontrivial issue in the verbose version(while the terse version's likely harder to fathom).
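If anyone wants to double check that the 2 versions really behave the same, a quick way's to wrap both in functions and compare their outputs; the wrapper names below are mine and the verbose body's slightly compacted, so treat this as a sketch rather than parts of the original snippets:
function terseFizzBuzz(min, max, increment, primes, words, textSeparator) {
    for (var texts = [], num = min; num <= max; num += increment) {
        var primeMods = primes.map(function(prime) { return num % prime; });
        texts.push(primeMods.reduce(function(text, mod, i) {
            return (text + (mod || words[i])).replace(mod, "");
        }, "") || num);
    }
    return texts.join(textSeparator);
}

function verboseFizzBuzz(min, max, increment, primes, words, textSeparator) {
    var texts = [];
    for (var num = min; num <= max; num = num + increment) {
        var text = "";
        for (var i = 0; i < primes.length; i = i + 1) {
            if (num % primes[i] === 0) text = text + words[i];
        }
        if (text === "") {
            texts.push(num);
        } else {
            texts.push(text);
        }
    }
    return texts.join(textSeparator);
}

var args = [1, 100, 1, [3, 5], ["Fizz", "Buzz"], "\n"];
console.log(terseFizzBuzz.apply(null, args) === verboseFizzBuzz.apply(null, args)); // true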
Back To The Math Analogy
Imagine that a mathematician and math professor who's used to teaching postdoc math now has to teach high school math to elementary school students(I've heard that a very small number of parents are ridiculous enough to want their elementary school children to learn high school math even when those children aren't interested in nor good at math). That's almost mission impossible, but all that teacher can do is to first consolidate the elementary math foundation of those students while fostering their interest in math, then gradually progress to middle school math, and finally to high school math once those students are good at middle school math. All those students can do is to work extremely hard to catch up with such great hurdles. Unfortunately, it seems to me that it'd take far too many resources, especially time, when those who can handle very HID well try to teach those who can only handle very LID well to handle HID. Even when those who can only handle very LID well can eventually be nurtured to meet the needs imposed by the codebase, it's still unlikely to be worth it, especially for software teams with very tight budgets, no matter how well intentioned it is.
So should those who can only handle very LID well train themselves up to be able to handle HID? I hope so, but I doubt it, as that's similar to asking a high school student to fathom postdoc math. While it's possible, I still guess that most of us will think that it's so costly and disproportional just to apply actually basic math formulae that are just written in terse styles; Should those who can handle very HID well learn how to deal with LID well as well? I hope so, but I doubt it, as that's similar to asking mathematicians to abandon their mother tongue when it comes to math(using words instead of symbols to express math). While it's possible, I still guess that most of us will think that it's so excessively ineffective and inefficient just to communicate that way with those who're very poor at math when discussing advanced math.
So it seems that maybe those who can handle HID well and those who can only handle LID well should avoid working with each other as much as possible. But that'd mean all these:
- The current software team must identify whether the majority can handle HID well or can only handle LID well, which isn't easy to do and is most often totally ignored
- The software engineering job requirement must state whether being able to deal with HID well will be prioritized or even required, which is an uncommon statement
- All applicants must know whether they can handle HID well, which is easily overlooked
- The candidate screening process must be able to tell who can handle HID well
- Most importantly, the team must be able to hire enough candidates who can handle HID well, and it's obvious that many software teams just won't be able to do that
Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HID well should try their best to only touch the HID parts of the codebase, while those who can only handle very LID well should try their best to only touch the LID parts of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't really be that modular.
A Considerable Solution
With an IDE supporting collapsing comments, one can try something like this:
/*
var returnedValue;
if (isValid === true) {
    var returnedArray;
    var isValidArray = (array !== null) && (array !== undefined);
    if (isValidArray === true) {
        returnedArray = array;
    } else {
        returnedArray = [];
    }
    var pushedObject;
    var isValidObject = (object !== null) && (object !== undefined);
    if (isValidObject === true) {
        pushedObject = object;
    } else if (canUseDefault === true) {
        pushedObject = default;
    } else {
        pushedObject = canUseDefault;
    }
    if (Array.isArray(pushedObject) === true) {
        returnedArray = returnedArray.concat(pushedObject);
    } else {
        returnedArray = returnedArray.concat([pushedObject]);
    }
    returnedValue = returnedArray;
} else {
    returnedValue = isValid;
}
return returnedValue;
*/
return isValid && (array || []).concat(object || canUseDefault && default);
Of course it's not practical when the majority of the codebase's so terse that those who can only handle very LID well will struggle most of the time, but those who can handle very HID well can try to do the former some favors when there aren't lots of terse codes for them to deal with. The point of this comment's to be a working compromise between the need to read codes effectively and efficiently for those who can handle very HID well, and the need to fathom codes easily and simply for those who can only handle very LID well.
Summary
In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes are just wasting their time in their perspectives; Those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while writing and reading terse codes are just too complicated and convoluted in their perspectives. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies, so all we can do is to try hard.
Information Volume
An Eating Analogy
Let's say we're ridiculously big eaters who can eat 1kg of meat per meal. But can we eat all of that 1kg of meat in just 1 chunk? Probably not, as our mouth just won't be big enough, so we'll have to cut it into digestible chunks. However, can we eat it if it becomes 1kg of very fine-grained meat powder? Maybe, but that's likely daunting or even dangerous(extremely high risk of severe choking) for most of us. So it seems that the best way's to find a happy medium that works for us, like cutting it into chunks that are just small enough for our mouth to handle. There might still be many chunks, but at least they'll be manageable enough.
The same can be largely applied to fathoming codes, even though there are still differences. Let's say you're reading a well-documented function with 100k lines and none of its business logic is duplicated in the entire codebase(so breaking this function up won't help code reuse right now). Unless we're so good at fathoming big functions that we can keep all these 100k lines of implementation details in our head as a whole, reading such a function will likely be daunting or even dangerous(extremely high risk of fathoming it all wrong) for most of us, assuming that we can indeed fathom it within a feasible amount of time(like within hours). On the other hand, if we break that 100k line function into extremely small functions so that the function call stack can be as deep as 100 calls, we'll probably be in really big trouble when we have to debug these functions for bugs that don't have apparently obvious causes and aren't caught by the current test suite(no test suite can catch all bugs after all). After all, traversing such a deep call stack without getting lost and having to start all over again is like eating tons of very fine-grained meat powder without ever choking severely. Even if we can eventually fix all those bugs with the test suite updated, it's still unlikely to be done within a reasonable amount of time(talking about days or even weeks when the time budget is tight).
This brings up the importance of information volume: A code chunk having a large piece of information content that isn't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that isn't abstracted away from that code chunk has LIV. For instance, the above 100k line function has HIV whereas the above small functions with a deep call stack have LIV.
So it seems that the best way's to find a happy medium that can break that 100k line function into fathomable chunks on one hand, while still keeping the call stack manageable on the other. For instance, if possible, breaking that 100k line function into those in which the largest ones are 1k line functions and the ones with the deepest call stack is 10 calls can be a good enough balance. While fathoming a 1k line function is still hard for most of us, it's at least practical; While debugging functions having call stacks with 10 calls is still time-consuming for most of us, it's at least realistic to be done within a tight budget. A Small Example Showing HIV vs LIV Unfortunately, different software engineers have different capacities on facing information volume, just like different people have different mouth size. Consider the following small example(Some of my Javascript ES5 codes with comments removed): LIV Version(17 methods with the largest being 4 lines and the deepest call stack being 11) - $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) { var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { return this._associativeResult(note, argObj_, funcNameSuffix); } return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) { var partResults = this._partResults(note, argObj_, funcNameSuffix); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) { var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) { var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) { var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; 
$._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) { var list = this._uncachedPairFuncList(note, "WithCache"); this._cache.updatePairFuncList(note, list); return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); return cache || this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) { var list = this._pairFuncListPartWithoutCache(note, part); this._cache.updatePairFuncListPart(note, part, list); return list; }; $._pairFuncListPartWithoutCache = function(note, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); return this._cache.partListData(part, this._battler).map(func); }; HIV Version(10 methods with the largest being 12 lines and the deepest call stack being 5) - $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedResultWithCache START var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; // $._updatedResultWithCache END }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { // $._associativeResult START // $._partResults START var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); var partResults = priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); // $._partResults END var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); // $._associativeResult START } // $._nonAssociativeResult START var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); // $._nonAssociativeResult END }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedPartResultWithCache_ START var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; // $._updatedPartResultWithCache_ END }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); if (cache) return cache; // $._updatedPairFuncListWithCache START var list = this._uncachedPairFuncList(note, 
"WithCache"); this._cache.updatePairFuncList(note, list); return list; // $._updatedPairFuncListWithCache END }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); if (cache) return cache; // $._updatedPairFuncListPartWithCache START var list = this._pairFuncListPartWithoutCache(note, part); this._cache.updatePairFuncListPart(note, part, list); return list; // $._updatedPairFuncListPartWithCache END }; $._pairFuncListPartWithoutCache = function(note, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); return this._cache.partListData(part, this._battler).map(func); }; In case you can't fathom what this example's about, you can read this simple flow chart(It doesn't mention the fact that the actual codes also handle whether the cache will be used): Even though the underlying business logic's easy to fathom, different people will likely react to the HIV and LIV Version differently. Those who can handle very HIV well will likely find the LIV version less readable due to having to unnecessarily traverse all these excessively small methods(the smallest ones being 1 liners) and enduring the highest call stack of 11 calls(from $.result to $._pairFuncListPartWithoutCache); Those who can only handle very LIV well will likely find the HIV version less readable due to having to unnecessarily fathom all these excessively mixed implementation details as a single unit in one go from the biggest method with 12 lines and enduring the presence of 3 different levels of abstractions combined just in the biggest and most complex method($._uncachedResult). Bear in mind that it's just a small example which is easy to fathom and simple to explain, so the differences between the HIV and LIV styles and the potential conflicts between those who can handle very HIV well and those who can only handle very LIV well will only be even larger and harder to resolve when it comes to massive real life production codebases. Back To The Eating Analogy Imagine that the size of the mouth of various people can vary so much that the largest digestible chunk of those with the smallest mouth are as small as a very fine-grained powder in the eyes of those with the largest mouth. Let's say that these 2 extremes are going to eat together sharing the same meal set. How should these meals be prepared? An obvious way's to give them different tools to break these meals into digestible chunks of sizes suiting their needs so they'll respectively use the tools that are appropriate for them, meaning that the meal provider won't try to do these jobs themselves at all. It's possible that those with the smallest mouth will happily break those meals into very fine-grained powders, while those with the largest mouth will just eat each individual food as a whole without much trouble. Unfortunately, it seems to me that there's still no well battle-tested automatic tools that can effectively and efficiently break a large code chunk into well-defined smaller digestible code chunks with configurable size and complexity without nontrivial side effects, so those who can only handle very LIV well will have to do it manually when having to fathom large functions. 
Also, even when there's such a tool, such automatic work's still effectively refactoring that function, thus probably irritating colleagues who can handle very HIV well.
So should those who can only handle very LIV well train themselves up to be able to deal with HIV? I hope so, but I doubt it, as that's similar to asking those with very small mouths to increase their mouth size. While it's possible, I still guess that most of us will think that it's so costly and disproportional just to eat foods in chunks that are too large for them; Should those who can handle very HIV well learn how to deal with LIV well as well? I hope so, but I doubt it, as that's similar to asking those with very large mouths to force themselves to eat very fine-grained meat powders without ever choking severely(getting lost when traversing a very deep call stack). While it's possible, I still guess that most of us will think that it's so risky and unreasonable just to eat foods as very fine-grained powders unless they really have no other choice at all(meaning that they should actually avoid these as much as possible).
So it seems that maybe those who can handle HIV well and those who can only handle LIV well should avoid working with each other as much as possible. But that'd mean all these:
- The current software team must identify whether the majority can handle HIV well or can only handle LIV well, which isn't easy to do and is most often totally ignored
- The software engineering job requirement must state whether being able to deal with HIV well will be prioritized or even required, which is an uncommon statement
- All applicants must know whether they can handle HIV well, which is easily overlooked
- The candidate screening process must be able to tell who can handle HIV well
- Most importantly, the team must be able to hire enough candidates who can handle HIV well, and it's obvious that many software teams just won't be able to do that
Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HIV well should try their best to only touch the HIV parts of the codebase, while those who can only handle very LIV well should try their best to only touch the LIV parts of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't really be that modular.
An Imagined Solution Let's say there's an IDE which can display the function calls in the inlined form, like from: $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) { var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { return this._associativeResult(note, argObj_, funcNameSuffix); } return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) { var partResults = this._partResults(note, argObj_, funcNameSuffix); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) { var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) { var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) { var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) { var list = this._uncachedPairFuncList(note, "WithCache"); this._cache.updatePairFuncList(note, list); return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); return cache || this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) { var list = this._pairFuncListPartWithoutCache(note, part); 
    this._cache.updatePairFuncListPart(note, part, list);
    return list;
};
$._pairFuncListPartWithoutCache = function(note, part) {
    var func = this._pairs.pairFuncs.bind(this._pairs, note);
    return this._cache.partListData(part, this._battler).map(func);
};

To be displayed as something like this:

$.result = function(note, argObj_) {
    if (!$gameSystem.satbParam("_isCached")) {
        // $._uncachedResult START
        if (this._rules.isAssociative(note)) {
            // $._associativeResult START
            // $._partResults START
            var priorities = this._rules.priorities(note);
            var partResults = priorities.map(function(part) {
                // $._partResultWithoutCache START
                // $._uncachedPartResult_ START
                // $._pairFuncListPartWithoutCache START
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                var list = this._cache.partListData(
                        part, this._battler).map(func);
                // $._pairFuncListPartWithoutCache END
                if (list.length <= 0) return undefined;
                return this._rules.chainedResult(list, note, argObj_);
                // $._uncachedPartResult_ END
                // $._partResultWithoutCache END
            }).filter(_SATB.IS_VALID_RESULT);
            // $._partResults END
            var defaultResult = this._pairs.default(note, argObj_);
            return this._rules.chainedResult(
                    partResults, note, argObj_, defaultResult);
            // $._associativeResult END
        }
        // $._nonAssociativeResult START
        // $._pairFuncListWithoutCache START
        // $._uncachedPairFuncList START
        var priorities = this._rules.priorities(note);
        var list = priorities.reduce(function(list, part) {
            // $._pairFuncListPartWithoutCache START
            var func = this._pairs.pairFuncs.bind(this._pairs, note);
            var l = this._cache.partListData(
                    part, this._battler).map(func);
            // $._pairFuncListPartWithoutCache END
            return list.concat(l);
        }.bind(this), []);
        // $._uncachedPairFuncList END
        // $._pairFuncListWithoutCache END
        var defaultResult = this._pairs.default(note, argObj_);
        return this._rules.chainedResult(
                list, note, argObj_, defaultResult);
        // $._nonAssociativeResult END
        // $._uncachedResult END
    }
    var cache = this._cache.result_(note, argObj_);
    if (_SATB.IS_VALID_RESULT(cache)) return cache;
    // $._updatedResultWithCache START
    // $._uncachedResult START
    var result;
    if (this._rules.isAssociative(note)) {
        // $._associativeResult START
        // $._partResults START
        var priorities = this._rules.priorities(note);
        var partResults = priorities.map(function(part) {
            // $._partResultWithCache START
            var cache = this._cache.partResult_(note, argObj_, part);
            if (_SATB.IS_VALID_RESULT(cache)) return cache;
            // $._updatedPartResultWithCache_ START
            // $._uncachedPartResult_ START
            // $._pairFuncListPartWithCache START
            var c = this._cache.pairFuncListPart_(note, part);
            var list;
            if (c) {
                list = c;
            } else {
                // $._updatedPairFuncListPartWithCache START
                // $._uncachedPairFuncListPart START
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                list = this._cache.partListData(
                        part, this._battler).map(func);
                // $._uncachedPairFuncListPart END
                this._cache.updatePairFuncListPart(note, part, list);
                // $._updatedPairFuncListPartWithCache END
            }
            // $._pairFuncListPartWithCache END
            var result = undefined;
            if (list.length > 0) {
                result = this._rules.chainedResult(list, note, argObj_);
            }
            // $._uncachedPartResult_ END
            this._cache.updatePartResult(note, argObj_, part, result);
            return result;
            // $._updatedPartResultWithCache_ END
            // $._partResultWithCache END
        }).filter(_SATB.IS_VALID_RESULT);
        // $._partResults END
        var defaultResult = this._pairs.default(note, argObj_);
        result = this._rules.chainedResult(
                partResults, note, argObj_, defaultResult);
        // $._associativeResult END
    }
    // $._nonAssociativeResult START
    // $._pairFuncListWithCache START
    var cache = this._cache.pairFuncList_(note), list;
    if (cache) {
        list = cache;
    } else {
        // $._updatedPairFuncListWithCache START
        // $._uncachedPairFuncList START
        var priorities = this._rules.priorities(note);
        var list = priorities.reduce(function(list, part) {
            // $._pairFuncListPartWithCache START
            var cache = this._cache.pairFuncListPart_(note, part);
            var l;
            if (cache) {
                l = cache;
            } else {
                // $._updatedPairFuncListPartWithCache START
                // $._uncachedPairFuncListPart START
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                l = this._cache.partListData(
                        part, this._battler).map(func);
                // $._uncachedPairFuncListPart END
                this._cache.updatePairFuncListPart(note, part, l);
                // $._updatedPairFuncListPartWithCache END
            }
            return list.concat(l);
            // $._pairFuncListPartWithCache END
        }.bind(this), []);
        // $._uncachedPairFuncList END
        this._cache.updatePairFuncList(note, list);
        // $._updatedPairFuncListWithCache END
    }
    // $._pairFuncListWithCache END
    var defaultResult = this._pairs.default(note, argObj_);
    result = this._rules.chainedResult(list, note, argObj_, defaultResult);
    // $._nonAssociativeResult END
    // $._uncachedResult END
    this._cache.updateResult(note, argObj_, result);
    return result;
    // $._updatedResultWithCache END
};

Or this one without comments indicating the starts and ends of the inlined functions:

$.result = function(note, argObj_) {
    if (!$gameSystem.satbParam("_isCached")) {
        if (this._rules.isAssociative(note)) {
            var priorities = this._rules.priorities(note);
            var partResults = priorities.map(function(part) {
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                var list = this._cache.partListData(
                        part, this._battler).map(func);
                if (list.length <= 0) return undefined;
                return this._rules.chainedResult(list, note, argObj_);
            }).filter(_SATB.IS_VALID_RESULT);
            var defaultResult = this._pairs.default(note, argObj_);
            return this._rules.chainedResult(
                    partResults, note, argObj_, defaultResult);
        }
        var priorities = this._rules.priorities(note);
        var list = priorities.reduce(function(list, part) {
            var func = this._pairs.pairFuncs.bind(this._pairs, note);
            var l = this._cache.partListData(
                    part, this._battler).map(func);
            return list.concat(l);
        }.bind(this), []);
        var defaultResult = this._pairs.default(note, argObj_);
        return this._rules.chainedResult(
                list, note, argObj_, defaultResult);
    }
    var cache = this._cache.result_(note, argObj_);
    if (_SATB.IS_VALID_RESULT(cache)) return cache;
    var result;
    if (this._rules.isAssociative(note)) {
        var priorities = this._rules.priorities(note);
        var partResults = priorities.map(function(part) {
            var cache = this._cache.partResult_(note, argObj_, part);
            if (_SATB.IS_VALID_RESULT(cache)) return cache;
            var c = this._cache.pairFuncListPart_(note, part);
            var list;
            if (c) {
                list = c;
            } else {
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                list = this._cache.partListData(
                        part, this._battler).map(func);
                this._cache.updatePairFuncListPart(note, part, list);
            }
            var result = undefined;
            if (list.length > 0) {
                result = this._rules.chainedResult(list, note, argObj_);
            }
            this._cache.updatePartResult(note, argObj_, part, result);
            return result;
        }).filter(_SATB.IS_VALID_RESULT);
        var defaultResult = this._pairs.default(note, argObj_);
        result = this._rules.chainedResult(
                partResults, note, argObj_, defaultResult);
    }
    var cache = this._cache.pairFuncList_(note), list;
    if (cache) {
        list = cache;
    } else {
        var priorities = this._rules.priorities(note);
        var list = priorities.reduce(function(list, part) {
            var cache = this._cache.pairFuncListPart_(note, part);
            var l;
            if (cache) {
                l = cache;
            } else {
                var func = this._pairs.pairFuncs.bind(this._pairs, note);
                l = this._cache.partListData(
                        part, this._battler).map(func);
                this._cache.updatePairFuncListPart(note, part, l);
            }
            return list.concat(l);
        }.bind(this), []);
        this._cache.updatePairFuncList(note, list);
    }
    var defaultResult = this._pairs.default(note, argObj_);
    result = this._rules.chainedResult(list, note, argObj_, defaultResult);
    this._cache.updateResult(note, argObj_, result);
    return result;
};

All with just 1 click on $.result. Bear in mind that the actual codebase hasn't changed one bit; it's just that the IDE displays the code in the new HIV form instead of the original LIV form. The goal of this feature is to keep the codebase in the LIV form, while still letting those who can handle HIV well read the codebase displayed in the HIV form. It's very unlikely for those who can only handle very LIV well to fathom such a complicated and convoluted 73 line method, with so many different levels of abstractions and implementation details all mixed up together, not to mention the vast amount of completely needless code duplication that isn't even easy or simple to spot quickly; those who can handle very HIV well, however, may feel that a 73 line method is so small that they can hold all of it in their heads as a whole very quickly without a hassle.

Of course, one doesn't have to show everything at once, so besides the aforementioned feature that inlines everything in the reading mode with just 1 click, the IDE should also support inlining 1 function at a time. Let's say we're to reveal _uncachedPairFuncListPart:

$._updatedPairFuncListPartWithCache = function(note, part) {
    var list = this._uncachedPairFuncListPart(note, part);
    this._cache.updatePairFuncListPart(note, part, list);
    return list;
};

Clicking that method name in the above method should lead to something like this:

$._updatedPairFuncListPartWithCache = function(note, part) {
    // $._uncachedPairFuncListPart START
    var func = this._pairs.pairFuncs.bind(this._pairs, note);
    var list = this._cache.partListData(
            part, this._battler).map(func);
    // $._uncachedPairFuncListPart END
    this._cache.updatePairFuncListPart(note, part, list);
    return list;
};

Similarly, clicking the method name updatePairFuncListPart should reveal the implementation details of that method of this._cache, provided that the IDE can access the code of that class. Such an IDE, if even possible in the foreseeable future, should at least reduce the severity of traversing a deep call stack with tons of small functions for those who can handle very HIV well, if not remove the problem entirely, without forcing those who can only handle very LIV well to deal with HIV, and without the issue of fighting over refactoring in this regard.
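To make the idea a bit more concrete, here's a minimal, display-only sketch of such an inlining view. Everything in it is an assumption for illustration: the table of inlinable calls is hard-coded (the sample entries reuse the tiny $._chainedResult delegates that appear later in the Very LID + LIV section), and the expansion is done with plain text substitution, whereas a real IDE feature would resolve the delegate bodies from the syntax tree and match call expressions instead of raw text:

// Minimal sketch only: maps a delegate call, exactly as written at its call
// site, to the single expression it forwards to (entries are assumed samples)
var INLINABLE_CALLS = {
    "this._chainedResultFunc(note)":
            "this._rules.chainResultFunc(note)",
    "this._runChainedResult(list, note, argObj_, initVal_, chainedResultFunc)":
            "chainedResultFunc(list, note, argObj_, initVal_)"
};

// Returns the HIV view of the given source text for display purposes only;
// the file on disk stays untouched in its original LIV form
function displayInlinedView(sourceText) {
    return Object.keys(INLINABLE_CALLS).reduce(function(text, call) {
        return text.split(call).join(INLINABLE_CALLS[call]);
    }, sourceText);
}

The point of the sketch is only that the expansion is a pure view: nothing is ever written back, so those who prefer LIV and those who prefer HIV can keep reading the very same file.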
Summary

In general, those who can handle very HIV well will prefer very long functions, as for them it's more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way, while writing and reading very short functions just goes in the opposite direction from their perspective; those who can only handle very LIV well will prefer very short functions, as for them it's easier and simpler to reason about well-defined abstractions (as long as they don't leak in nontrivial ways) that way, while writing and reading long functions just goes in the opposite direction from their perspective.

Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with code having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiency (especially when those small function abstractions do leak in nontrivial ways), so all we can do is to try hard.

Combining Information Density With Information Volume

Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy

Imagine that you're reading a well-documented 100k line function where almost every line's written like some of the most complex math formulae. I'd guess that even the best of the best software engineers will never ever want to touch this perverted beast again in their lives. Usually such codebases are considered dead and will thus probably be rewritten from scratch.

Of course, HID + HIV isn't always this extreme, as the aforementioned 73 line version of $.result also falls into this category. Even though it'd still be a hellish nightmare for most software engineers to work with if many functions in the codebase are written this way, it's still feasible to refactor them into very high quality code within a reasonably tight budget if we've the highest devotion, diligence and discipline possible. While such an iron fist approach should only be the last resort, sometimes it's called for, so we should be ready.

Nevertheless, try to avoid HID + HIV as much as possible, unless the situation really, really calls for it, like optimizing a massive production codebase to death (e.g. gameplay code), or when the problem domain's so chaotic and unstable that no sane or sensible architecture will survive for even just a short time (pathetic architectures can be way worse than none). If you still want to use this style even when it's clearly unnecessary, you should have the most solid reasons and evidence possible to prove that it's indeed doing more good than harm.

Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First

For instance, the code below falls into this category:

return isValid && (array || []).concat(object || canUseDefault && default);

Imagine that you're reading a codebase having mostly well-defined and well-documented small functions (but far from being mostly 1 liners), where most of those small functions are written like some of the most complex math formulae.
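For contrast, here's one way that 1 liner could look in a much more LID style. All the names here are my own assumptions, and the reserved word default is renamed to defaultVal so the sketch actually parses; it's only meant to show how the same logic spreads out once every step is spelled out:

// An assumed LID counterpart of the terse 1 liner above (illustrative only)
function combinedResult(isValid, array, object, canUseDefault, defaultVal) {
    // Bail out early, returning the falsy isValid just like the && would
    if (!isValid) return isValid;
    // array || []
    var baseArray;
    if (array) {
        baseArray = array;
    } else {
        baseArray = [];
    }
    // object || canUseDefault && defaultVal
    var appendedElement;
    if (object) {
        appendedElement = object;
    } else if (canUseDefault) {
        appendedElement = defaultVal;
    } else {
        // Preserves the falsy canUseDefault result of the short-circuit &&
        appendedElement = canUseDefault;
    }
    return baseArray.concat(appendedElement);
}

Whether the 1 liner or this roughly 20 line version reads better is exactly the HID versus LID split discussed above.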
While fathoming such code at first will be very difficult, because the functions are well-documented, they'll be easy to edit once you've fathomed them with the help of those comments; because the functions are small and well-defined, they'll be easy to use once you've fathomed how they're being called, with the help of callers that are themselves high quality code.

Of course, HID + LIV doesn't always mean small short term pains with large long term pleasures, as it's impossible to ensure that none of those abstractions will ever leak in nontrivial ways. While the codebase will be easy to work with when it only ever has bugs that are either caught by the test suite or have at least some obvious causes, such a codebase can still be daunting to work with once it produces rare bugs that are hard to even reproduce, all because it's very hard to form the full picture, with every last bit of nontrivial relevant detail, of a massive codebase having mostly small but very terse functions.

Nevertheless, as long as all things are kept in moderation (one can always try in this regard), HID + LIV is generally advantageous as long as the codebase's large enough to warrant large scale software architectures and designs (the lifespan of the codebase should also be long enough), but not so large that no one can form the full picture anymore, as the long term pleasures will likely be large and long enough to outweigh the short term pains a lot here.

Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate

Think of an extremely verbose codebase full of boilerplate and exceptionally long functions. Maybe those functions are long because of the verbosity, but you usually can't tell before actually reading them all. Anyway, you'll probably feel that the codebase's just wasting lots of your time once you realize that most of those long functions aren't actually doing much.

Think of the aforementioned 28 line verbose Javascript example having an extremely easy, simple and small terse 1 line counterpart, and think of the former being ubiquitous in the codebase. I guess that even the most verbose software engineers will want to refactor it all, as working with it'd just be way too ineffective and inefficient otherwise.

Of course, LID + HIV isn't always that bad, especially when things are kept in moderation. At least, it'd be easy for most newcomers to fathom the codebase, so codebases written in this style can actually be very beginner-friendly, which is especially important for software teams having very high turnover rates. Even though it's unlikely that anyone can work with such a codebase effectively or efficiently no matter how much they've fathomed it, due to the heavy verbosity and loads of boilerplate, the problem will be less severe if the codebase's short-lived. Also, writing code in this style can be extremely fast at first, even though it'll gradually become slower and slower, so this style's very useful at least for prototyping/making PoCs.

Nevertheless, LID + HIV shouldn't be used on codebases that'd already be very large without the extra verbosity and boilerplate, especially when they're going to have a very long lifespan. Just think of a codebase that could be kept at the 100k line scale with very terse (but still readable) code, but which reaches the 10M line scale because all those terse codes are completely refactored into tons of verbose code with boilerplate.
Needless to say, almost no one will continue on this road if he/she knows that the codebase will become that large that way.

Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep

For instance, the code below falls into this category:

/* This is the original code
$._chainedResult = function(list, note, argObj_, initVal_) {
    var chainedResultFunc = this._rules.chainResultFunc(note);
    return chainedResultFunc(list, note, argObj_, initVal_);
};
*/
// This is the refactored code
$._chainedResult = function(list, note, argObj_, initVal_) {
    var chainedResultFunc = this._chainedResultFunc(note);
    return this._runChainedResult(
            list, note, argObj_, initVal_, chainedResultFunc);
};
$._chainedResultFunc = function(note) {
    return this._rules.chainResultFunc(note);
};
$._runChainedResult = function(list, note, argObj_, initVal_, resultFunc) {
    return resultFunc(list, note, argObj_, initVal_);
};

Think of a codebase with less than 100k lines but already way more than 1k classes/interfaces and 10k functions/methods. It's almost a given that the deepest call stack in the codebase will be so deep that it can even approach the 100 call mark. That's because the only way for very small functions to also be very verbose with tons of boilerplate is for most of those small functions to not actually be doing anything meaningful. We're talking about deeply nested delegates/forwarding functions which are all indeed doing very easy, simple and small jobs, and tons of interfaces or explicit dependencies having only 1 implementation or concrete dependency (configurable options with only 1 option ever used also have this issue; see the sketch at the end of this section for an illustration).

Of course, LID + LIV does have its places, especially when the business requirements always change so abruptly, frequently and unpredictably that even the most reasonable assumptions can be suddenly violated without any reason at all (I've worked on 1 such project). As long as there can still be sane and sensible architectures that can last very long, if the codebase isn't flexible in almost every direction, the software teams won't be able to make it when they have to implement absurd changes with ridiculously tight budgets and schedules. And the only way for the codebase to be that flexible is to have as many well-defined interfaces and seams as possible, as long as everything else is still in moderation. For the newcomers, the codebase will seem to be overengineered for things that never happened, but that's what you'd likely do when you can never know what's invariant.

Nevertheless, LID + LIV should still be refactored once there are solid reasons and evidence to prove that the codebase can begin to stabilize, or the hidden technical debt incurred from excessive overengineering can quickly accumulate to the point of no return. At that point, even understanding the most common call stack can be almost impossible. Of course, if the codebase can really never stabilize, then one can only hope for the best and be prepared for the worst, as such projects are likely death marches, or slowly becoming one. Rare exceptions exist where some codebases have to be this way, like the default RPG Maker MV codebase, due to the business model that any RPG Maker MV user can have any feature request and any RPG Maker MV plugin developer can develop any plugin with any feature.
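As a small illustration of the "only 1 implementation or concrete dependency" point above, here's an assumed sketch (every name in it is invented for this example and isn't taken from any real codebase) of an injectable dependency whose seam is never actually varied, which is exactly the kind of layering that deepens call stacks without ever being exercised:

// Assumed illustration only: an explicit dependency with 1 concrete
// implementation ever written or used anywhere in the codebase
function NoteRules() {}
NoteRules.prototype.chainResultFunc = function(note) {
    throw new Error("chainResultFunc: to be overridden by concrete rules");
};

// The 1 and only concrete implementation anyone ever writes or injects
function DefaultNoteRules() {}
DefaultNoteRules.prototype = Object.create(NoteRules.prototype);
DefaultNoteRules.prototype.constructor = DefaultNoteRules;
DefaultNoteRules.prototype.chainResultFunc = function(note) {
    // Always chains by taking the last listed result, whatever the note is
    return function(list, note, argObj_, initVal_) {
        return list.length > 0 ? list[list.length - 1] : initVal_;
    };
};

// Every construction site only ever ends up with DefaultNoteRules, so the
// configurable seam just adds indirection and call stack depth
function makeResultContext(rules_) {
    return { _rules: rules_ || new DefaultNoteRules() };
}

Each extra layer like this is tiny and trivially correct on its own, which is exactly why the resulting call stacks get so deep before anyone notices.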
Summary

While information density and volume are closely related, there's no strict implication from one to the other, meaning that there are different combinations of these 2 factors and the resultant styles can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions:

Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy
Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First
Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate
Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep

Teams With Programmers Having Different Styles

Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes

While both can work with very HID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The latter values codebase quality over software engineer mental capacity due to their limits on taking information volume, while the former values the opposite due to their exceptionally strong mental power. Thus the former will likely think of the latter as being too weak to fathom the codes and thus the ones to blame, while the latter will probably think of the former as having too little architecture in mind and thus the ones to blame, as architectures that are beneficial or even necessary for the latter will probably be severe obstacles for the former.

Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things

While both can work with very HIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values function simplicity over function capabilities due to their limits on taking information density, while the former values the opposite due to their extremely strong information density decoding. Thus the former will likely think of the latter as doing too few things that actually matter in terms of important business logic, as simplicity for the latter means time wasted for the former, while the latter will probably think of the former as being needlessly complex when it comes to implementing important business logic, as development speed for the former means complexity that is just too high for the latter (no matter how hard they try).

Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak

It's clear that these 2 groups are at complete opposites: the former prefers massive balls of complicated and convoluted spaghetti legacy over too many small functions with the call stacks being too deep due to the heavy need of optimizing the codebase to death, while the latter prefers the opposite due to the heavy need of making the codebase very flexible. Thus the former will likely think of the latter as over-engineering freaks while the latter will probably think of the former as over-optimization freaks, as codebase optimization and flexibility are often somehow at odds with each other, especially when one is heavily pursued.
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose

It's clear that these 2 groups are at complete opposites: the former prefers otherwise high quality codes that are hard to fathom at first over excessively verbose codes with tons of redundant boilerplate due to the heavy emphasis on the large long term pleasures, while the latter prefers the opposite due to the heavy emphasis on the small short term pains. Thus the former will likely think of the latter as being too messy and verbose while the latter will probably think of the former as being too concise and organized, as long term pleasures from high codebase quality are often at odds with the short term pains of the newcomers fathoming the codebase at first, especially when one is heavily emphasized over the other.

Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient

While both can only work with very LIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values the learning cost over the maintenance cost (the cost of reading already fathomed codes during maintenance) due to their limits on taking information density, while the former values the opposite due to their good information density skills and reading speed demands. Thus the former will likely think of the latter as being too ineffective and inefficient when writing codes that are easy to fathom in the short term but time-consuming to read in the long term, while the latter will likely think of the former as being too harsh to newcomers when writing codes that are fast to read in the long term but hard to fathom in the short term.

Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles

While both can only work with very LID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The former values codebase beginner friendliness over software flexibility due to their generally lower tolerance of very small functions, while the latter values the opposite due to their limited information volume capacity and high familiarity with very small and flexible functions. Thus the former will likely think of the latter as being too flexible towards cases that are almost impossible to happen under the current business requirements, as such codebases are generally harder for newcomers to fathom, while the latter will likely think of the former as being too friendly towards beginners at the expense of writing too rigid codes, as beginner friendly codebases are usually those only thinking about the present needs.

Summary

It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are becoming more and more extreme. The combinations of these conflicts may be:

Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes
Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things
Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose
Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient
Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles

Conclusions

Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there's quite some middle ground to play with.
In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle ground well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some software engineers falling into at least 1 of these extremes, so we should still know how to work well with them. After all, nowadays most real life business codebases are about teamwork, not lone wolves.

By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think about some of the aspects behind codebase readability and the nature of conflicts about it, and that we can become better at dealing with more different kinds of codebases and software engineers. I think it's more feasible for us to learn to read codebases with different information density and volume than to ask others and the codebase to accommodate our own information density/volume limitations.

Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large (e.g. the existence of consistent formatting and meaningful naming) and partially subjective at large (e.g. the ability of different software engineers to handle different kinds of information density and volume). Maybe many avoidable conflicts involving readability stem from the tendency of many software engineers to treat readability as an easy, simple and small concept that's entirely objective.