Jump to content

DoubleX

Member
  • Content Count

    1,000
  • Joined

  • Last visited

  • Days Won

    9

Everything posted by DoubleX

  1. Note This plugin's available for commercial use Purpose Fixes DoubleX RMMV Popularized ATB compatibility issues Games using this plugin None so far Action Sequences Addressed Plugins Video https://www.youtube.com/watch?v=aoBI3DaE3g8 Prerequisites Plugins: 1. DoubleX RMMV Popularized ATB Core Abilities: 1. Nothing special Instructions Place this plugin below all DoubleX RMMV Popularized ATB addons Terms Of Use You shall keep this plugin's Plugin Info part's contents intact You shalln't claim that this plugin's written by anyone other than DoubleX or his aliases None of the above applies to DoubleX or his/her aliases Changelog Download Link DoubleX RMMV Popularized ATB Compatibility
  2. Updates * v1.03f(GMT 0700 23-6-2021): * 1. Fixed the visuals of the action sequences of actor sprites being * reset when other actors are inputable bug
  3. Descriptions The following image briefly outlines the core structure of this whole idea, which is based on the idea of applying purely server-side rendering on games: https://github.com/Double-X/Image-List/blob/master/Future%20MP%20Games%20Architecture.png Note that the client side should have next to no game state or data, nor audio/visual assets, as they're supposed to never leave the server side. The following's the general flow of games using this architecture(all these happen per frame): 1. The players start running the game with the client IO 2. The players setup input configurations(keyboard mapping, mouse sensitivity, mouse acceleration, etc), graphics configurations(resolution, fps, gamma, etc), client configurations(player name, player skin, other preferences not impacting gameplay, etc), and anything that only the players can have information of 3. The players connect to servers 4. The players send all those configurations and settings to the servers(those details will be sent again if players changed them during the game within the same servers) 5. The players makes raw inputs(like keyboard presses, mouse clicks, etc) as they play the game 6. The client IO captures those raw player inputs and sends them to the server IO(but there's never any game data/state synchronization among them) 7. The server IO combines those raw player inputs and the player input configurations for each player to form commands that the game can understand 8. Those game commands generated by all players in the server will update the current game state set 9. The game polls the updated current game state set to form the new camera data for each player 10. The game combines the camera data with the player graphics configurations to generate the rendered graphics markups(with all relevant audio/visual assets used entirely in this step) which are highly compressed and obfuscated and have the least amount of game state information possible 11. The server IO captures the rendered graphics markups and send them to the client IO of each player(and nothing else will ever be sent in this direction) 12. The client IO draws the fully rendered graphics markups(without needing nor knowing any audio/visual asset) on the game screen visible by each player The aforementioned flow can also be represented this way: https://github.com/Double-X/Image-List/blob/master/Future%20MP%20Games%20Architecture%20Flow.png Differences From Cloud Gaming Do note that it's different from cloud gaming in the case of multiplayer(although it's effectively the same in the case of single player), because cloud gaming doesn't demand the games to be specifically designed for that, while this architecture does, and the difference means that: 1. In cloud gaming, different players rent different remote machines, each hosting the traditional client side of the game, which communicates with the traditional server side of the game in the same real server that's distinct from those middlemen devices, meaning that there will be at most 2 round trips per frame(between the client and the remote machine, and between the remote machine and the real server), so if the remote machines isn't physically close to the real server, and the players aren't physically close to the remote machines, the latency can raise to an absurd level 2. 
This architecture forces games complying with it to be designed differently from the traditional counterparts right from the start, so it can install the client version(having minimal contents) directly into the device for each player, which directly communicates with the server side of the game in the same server(which has almost everything), thus removing the need of a remote machine per player as the middleman, and hence the problems created by it(latency and the setup/maintenance cost from those remote machines) 3. The full cycle of the communications in cloud gaming is the following: - The player machines send the raw input commands to the remote machines - The remote machines convert those commands into new game states of the client side of the game there - The client side of the game in those remote machines synchronize with the server side of the game in the real server - The remote machines draw new visuals on their screens and play new audios based on the latest game states on the client side of the game there - The remote machines send those audio and visual information to the player machines - The player machines redraw those new audios and visuals there 4. The full cycle of the communications of this architecture is the following: - The player machines send the raw input commands directly to the real server - The real server convert those commands into the new game states of the server side of the game there - The real server send new audio and visual information to the player machines based on the involved parts of the latest game states on the server side of the game there - The player machines draw those new audios and visuals there 3 + 4 means the rendering actually happens 2 times in cloud gaming - 1 in the remote machines and 1 in the player machines, while the same happens just once in this architecture - just the player machines directly, and the redundant rendering in cloud gaming can contribute quite a lot to the end latency experienced by players, so this is another advantage of this architecture over cloud gaming. In short, cloud gaming supports games not having cloud gaming in mind(and is thus backward compatible) but can suffer from insane latency and increased business costs(which will be transferred to players), while this architecture only supports games targeting it specifically(and is thus not backward compatible) but removes quite some pains from the remote machine in cloud gaming(this architecture also has some other advantages over cloud gaming, but they’ll be covered in the next section). On a side note: If some cloud gaming platforms don't let their players to join servers outside of them, while it'd remove the issue of having 3 entities instead of just 2 in the connection, it'd also be more restrictive than this architecture, because the latter only restricts all players to same the same game using it. Advantages The advantages of this architecture at least include the following: 1. The game requirements on the client side can be a lot lower than the traditional architecture(although cloud gaming also has this advantage), as now all the client side does is sending the captured raw player inputs(keyboard presses, mouse clicks, etc) to the server side, and draws the received rendered graphics markup(without using any audio/visual assets in this step and the client side doesn't have any of them anyway) on the game screen visible by each player 2. 
Cheating will become next to impossible(cloud gaming may or may not have this advantage), as all cheats are based on game information, and even the state of the art machine vision still can't retrieve all the information needed for cheating within a frame(even if it just needs 0.5 seconds to do so, it's already too late in the case of professional FPS E-Sports, not to mention that the rendered graphics markup can change per frame, making machine vision even harder to work well there), and it'd be a epoch-making breakthrough on machine vision if the cheats can indeed generate the correct raw player inputs per frame(especially when the rendered graphics markups are highly obfuscated), which is definitely doing way more good than harm to the mankind, so games using this architecture can actually help pushing the machine vision researches 3. Game piracy and plagiarisms will become a lot more costly and difficult(cloud gaming may or may not have this advantage), as the majority of the game contents and files never leave the servers, meaning that those servers will have to be hacked first before those pirates can crack those games, and hacking a server with the very top-notch security(perhaps monitored by network and server security experts as well) is a very serious business that not many will even have a chance 4. Game data and state synchronization should no longer be an issue(while cloud gaming won't have this advantage), because the client side should've nearly no game data and state, meaning that there's should be nothing to synchronize with, thus this setup not only removes tons of game data/state integrity troubles and network issues, but also deliberate or accidental exploits like lag switching(so servers no longer has to kick players with legitimately high latency because those players won't have any advantage anymore, due to the fact that such exploits would just cause the users to become inactive for a very short time per lag in the server, thus they'd be the only ones being under disadvantages) Disadvantages The disadvantages of this architecture at least include the following: 1. The game requirements and the maintenance cost on the server side will become ridiculous - perhaps a supercomputer, computer cluster, or a computer cloud will be needed for each server, and I just don't know how it'll even be feasible for MMO to use this architecture in the foreseeable future 2. The network traffic in this architecture will be absurdly high, because all players are sending raw input to the same server, which sends back the rendered graphics markup to each player(even though it's already highly compressed), all happening per frame, meaning that this can lead to serious connection issues with servers having low capacity and/or players with low connection speed/limited network data usage 3. The rendered graphics markup needs to be totally lossless in terms of visual qualities on one hand, otherwise it'd be a bane for games needing the state of the art graphics; It also needs to be highly compressed and obfuscated on the other, because the network traffic must be minimized and the markup needs to defend against cheats. These mean it'd be extremely hard to properly implement the rendered graphics markup, let alone without creating new problems 4. 
The inherent network latency due to the physical distance between the clients and the servers will be even more severe, because now the client has to communicate with the server per frame, meaning that the servers must be physically located nearby the players, and thus many servers across many different cities will be needed How Disadvantages Diminish Over Time Clearly, the advantages from this architecture will be unprecedented if the architecture itself can ever be realized, while its disadvantages are all hardware and technical limitations that will become less and less significant, and will eventually become trivial. So while this architecture won't be the reality in the foreseeable future(at least several years from now), I still believe that it'll be the distant future(probably in terms of decades). For instance, let's say a player joins a server being 300km away from his/her device(which is a bit far away already) to play a game with a 1080p@120Hz setup using this architecture, and the full latency would have to meet the following requirements in order to have everything done within around 9ms, which is a bit more than the maximum time allowed in 120 FPS: The client will take around 1ms to capture and start sending the raw input commands from the player The minimum ping, which is limited by the speed of light, will be 2 * 300km / 300,000km per second = around 2ms The server will take around 1ms to receive and combine all raw input commands from all players The server will take around 1ms to convert the current game state set with those raw input commands to form the new game state set The server will take around 1ms to generate all rendered graphics markups(which are lossless, highly compressed and highly obfuscated) from the new camera state of all players The server will take around 1ms to start sending those rendered graphics markups to all players The client will take around 1ms to receive and decompress the rendered graphics markup of the corresponding player The client will take around 1ms to render the decompressed rendered graphics markup as the end result being perceived by the player directly Do note that hardware limitations, like mouse and keyboard polling rate, as well as monitor response time, are ignored, because they'll always be there regardless of how a multiplayer game is designed and played. Of course, the above numbers are just outright impossible within years, especially when there are dozens of players in the same server, but they should become something very real after a decade or 2, because by then the hardware we've should be much, much more powerful than those right now. Similarly, for a 1080p@120Hz setup, if the rendering is lossless but isn't compressed at all, it'd need (1920 * 1080) pixels * 32 bit * 120 FPS + little bandwidth from raw inputting commands sent to the server = Around 1GB/s per player, which is of course insane to the extreme right now, and the numbers for 4K@240Hz and 8K@480Hz(assuming that it'll or is always a real thing) setups will be around 8GB/s and 64GB/s per player respectively, which are just incredibly ridiculous in the foreseeable future. However, as the rendering markups sent to the client should be highly compressed, the actual numbers shouldn't be this large, and even if the rendering isn't compressed at all, in the distinct future, when 6G, or even newer generations, become the new norm, these numbers, while will still be quite something, should become practical enough in everyday gaming, and not just for enthusiasts. 
Nevertheless, there might be an absolute limit on the screen resolution and/or FPS that can be supported by this architecture no matter how powerful the hardware is, so while I think this architecture will be the distinct future(like after a decade or 2), it probably won't be the only way multiplayer games being written and played, because the other models still have their values even by then. Future Implications If this architecture becomes the practical mainstream, the following will be at least some of the implications: 1. The direct one time price of the games, and also the indirect one(the need to upgrade the client machine to play those games) will be noticeably lower, as the games are much less demanding on the client side(drawing an already rendered graphics markup, especially without needing any audio nor visual assets, is generally a much, much easier, simpler and smaller task than generating that markup itself, and the client side hosts almost no game data nor state so the hard disk space and memory required will also be a lot lower) 2. The periodic subscription fee will exist in more and more games, and those already having such fee will likely increase the fee, in order to compensate for the increasing game maintenance cost from upgraded servers(these maintenance cost increments will eventually be cancelled out by hardware improvements causing the same hardware to become cheaper and cheaper) 3. The focus of companies previously making high end client CPU, GPU, RAM, hard disk, motherboard, etc will gradually shift their business into making server counterparts, because the demands of high end hardware will be relatively smaller and smaller on the client side, but will be relatively larger and larger on the server side 4. The demands of high end servers will be higher and higher, not just from game companies, but also for some players investing a lot into those games, because they'd have the incentive to build some such servers themselves, then either use them to host some games, or rent those servers to others who do Anti-Cheating In the case of highly competitive E-Sports, the server can even implement some kind of fuzzy logic, which is fine-tuned with a deep learning AI, to help report suspicious raw player input sets(consisted of keyboard presses, mouse clicks, etc) with a rating on how suspicious it is, which can be further broken down to more detailed components on why they're that suspicious. This can only be done effectively and efficiently if the server has direct access to the raw player input set, which is one of the cornerstones of this very architecture. Combining this with traditional anti cheat measures, like having a server with the highest security level, an in-game admin having server level access to monitor all players in the server(now with the aid of the AI reporting suspicious raw player input sets for each player), another admin for each team/side to monitor player activities, a camera for each player, and thoroughly inspected player hardware, it'll not only make cheating next to impossible in major LAN events(also being cut off from external connections), but also so obviously infeasible and unrealistic that almost everyone will agree that cheating is indeed nearly impossible there, thus drastically increasing their confidence on the match fairness. Hybrid Models Of course, games can also use a hybrid model, and this especially applies to multiplayer games also having single player modes. 
If the games support single player, of course the client side needs to have everything(and the piracy/plagiarism issues will be back), it's just that most of them won't be used in multiplayer if this architecture's used. If the games runs on the multiplayer, the hosting server can choose(before hosting the game) whether this architecture's used(of course, only players with the full client side package can join servers using the traditional counterpart, and only players with the server side subscription can join servers using this architecture). Alternatively, players can choose to play single player modes with a server for each player, and those servers are provided by the game company, causing players to be able to play otherwise extremely demanding games with a low-end machine(of course the players will need to apply for the periodic subscriptions to have access of this kind of single player modes). On the business side, it means such games will have a client side package, with a one time price for everything in the client side, and a server side package, with a periodic subscription for being able to play multiplayer, and single player with a dedicated server provided, then the players can buy either one, or both, depending on their needs and wants. This hybrid model, if both technically and economically feasible, is perhaps the best model I can think of.
  4. Let's imagine that the job of a harvester is to use an axe to harvest trees, and the axe will deteriorate over time. Assuming that the following's the expected performance of the axe: Fully sharp axe(extremely excellent effectiveness and efficiency; ideal defect rates) - 1 tree cut / hour 1 / 20 chance for the tree being cut to be defective(with 0 extra decent tree to be cut for compensation as compensating trees due to negligible damages caused by defects) Expected number of normal trees / tree cut = (20 - 1 = 19) / 20 Becomes a somehow sharp axe after 20 trees cut(a fully sharp axe will become a somehow sharp axe rather quickly) Somehow sharp axe(reasonably high effectiveness and efficiency; acceptable defect rates) - 1 tree cut / 2 hours 1 / 15 chance for the tree being cut to be defective(with 1 extra decent tree to be cut for compensation as compensating trees due to nontrivial but small damages caused by defects) Expected number of normal trees / tree cut = (15 - 1 - 1 = 13) / 15 Becomes a somehow dull axe after 80 trees cut(a somehow sharp axe will usually be much more resistant on having its sharpness reduced per tree cut than that of a fully sharp axe) Needs 36 hours of sharpening to become a fully sharp axe(no trees cut during the atomic process) Somehow dull axe(barely tolerable effectiveness and efficiency; alarming defect rates) - 1 tree cut / 4 hours 1 / 10 chance for the tree being cut to be defective(with 2 extra decent trees to be cut for compensation as compensating trees due to moderate but manageable damages caused by defects) Expected number of normal trees / tree cut = (10 - 1 - 2 = 7) / 10 Becomes a fully dull axe after 40 trees cut(a somehow dull axe is just ineffective and inefficient but a fully dull axe is significantly dangerous to use when cutting trees) Needs 12 hours of sharpening to become a somehow sharp axe(no trees cut during the atomic process) Fully dull axe(ridiculously poor effectiveness and efficiency; obscene defect rates) - 1 tree cut / 8 hours 1 / 5 chance for the tree being cut to be defective(with 3 extra decent trees to be cut for compensation as compensating trees due to severe but partially recoverable damages caused by defects) Expected number of normal trees / tree cut = (5 - 1 - 3 = 1) / 5 Becomes an irreversibly broken axe(way beyond repair) after 160 trees cut The harvester will resign if the axe keep being fully dull for 320 hours(no one will be willing to work that dangerously forever) Needs 24 hours of sharpening to become a somehow dull axe(no trees cut during the atomic process) Now, let's try to come up with some possible work schedules: Sharpens the axe to be fully sharp as soon as it becomes somehow sharp - Expected to have 19 normal trees and 1 defective tree cut after 1 * (19 + 1) = 20 hours(simplifying "1 / 20 chance for the tree being cut to be defective" to be "1 defective tree / 20 trees cut") Expected the axe to become somehow sharp now, and become fully sharp again after 36 hours Expected long term throughput to be 19 normal trees / (20 + 36 = 56) hours(around 33.9%) Sharpens the axe to be somehow sharp as soon as it becomes somehow dull - The initial phase of having the axe being fully sharp's skipped as it won't be repeated Expected to have 68 normal trees, 6 defective trees, and 6 compensating trees cut after 2 * (68 + 6 + 6) = 160 hours(simplifying "1 / 15 chance for the tree being cut to be defective" to be "1 defective tree / 15 trees cut" and using the worst case) Expected the axe to become somehow dull now, 
and become somehow sharp again after 12 hours Expected long term throughput to be 68 normal trees / (160 + 12 = 172) hours(around 39.5%) Sharpens the axe to be somehow dull as soon as it becomes fully dull - The initial phase of having the axe being fully or somehow sharp's skipped as it won't be repeated Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + = 160 hours(simplifying "1 / 10 chance for the tree being cut to be defective" to be "1 defective tree / 10 trees cut") Expected the axe to become fully dull now, and become somehow dull again after 24 hours Expected long term throughput to be 28 normal trees / (160 + 24 = 184) hours(around 15.2%) Sharpens the axe to be somehow dull right before the harvester will resign - The initial phase of having the axe being fully or somehow sharp's skipped as it won't be repeated Expected to have 28 normal trees, 4 defective trees, and 8 compensating trees cut after 4 * (28 + 4 + = 160 hours(simplifying "1 / 10 chance for the tree being cut to be defective" to be "1 defective tree / 10 trees cut") when the axe's somehow dull Expected the axe to become fully dull now, and expected to have 4 normal trees, 8 defective trees, and 24 compensating trees but after 8 * (4 + 8 + 24) = 288 hours(simplifying "1 / 5 chance for the tree being cut to be defective" to be "1 defective tree / 5 trees cut" and using the worst case) when the axe's fully dull Expected total number of normal trees to be 28 + 4 = 32 Expected the axe to become somehow dull again after 24 hours(so the axe remained fully dull for 288 + 24 = 312 hours, which is the maximum before the harvester will resign) Expected long term throughput to be 32 normal trees / (160 + 312 = 472) hours(around 6.7%) Sharpens the axe to be fully sharp as soon as it becomes somehow dull - Expected total number of normal trees to be 19 + 68 = 87 Expected total number of hours to be 56 + 172 = 228 hours Expected long term throughput to be 87 normal trees / 228 hours(around 38.2%) Sharpens the axe to be fully sharp as soon as it becomes fully dull - Expected total number of normal trees to be 19 + 68 + 28 = 115 Expected total number of hours to be 56 + 172 + 184 = 412 hours Expected long term throughput to be 115 normal trees / 412 hours(around 27.9%) Sharpens the axe to be fully sharp right before the harvester will resign - Expected total number of normal trees to be 19 + 68 + 32 = 119 Expected total number of hours to be 56 + 172 + 472 = 700 hours Expected long term throughput to be 119 normal trees / 700 hours(17%) Sharpens the axe to be somehow sharp as soon as it becomes fully dull - Expected total number of normal trees to be 68 + 28 = 96 Expected total number of hours to be 172 + 184 = 356 hours Expected long term throughput to be 96 normal trees / 356 hours(around 26.9%) Sharpens the axe to be somehow sharp right before the harvester will resign - Expected total number of normal trees to be 68 + 32 = 100 Expected total number of hours to be 172 + 472 = 644 hours Expected long term throughput to be 100 normal trees / 644 hours(around 15.5%) So, while these work schedules clearly show that sharpening the axe's important to maintain effective and efficient long term throughput, trying to keep it to be always fully sharp is certainly going overboard(33.9% throughput), when being somehow sharp is already enough(39.5% throughput). Then why some bosses don't let the harvester sharpen the axe even when it's somehow or even fully dull? 
Because sometimes, a certain amount of normal trees have to be acquired within a set amount of time. Let's say that the axe has become from fully sharp to just somehow dull, so there should be 87 normal trees cut after 180 hours, netting the short term throughput of 48.3%. But then some emergencies just come, and 3 extra normal trees need to be delivered within 16 hours no matter what, whereas compensating trees can be delivered later in the case of having defective trees. In this case, there won't be enough time to sharpen the axe to be even just somehow sharp, because even in the best case, it'd cost 12 + 2 * 3 = 18 hours. On the other hand, even if there's 1 defective tree from using the somehow dull axe within that 16 hours, the harvester will still barely make it on time, because the chance of having 2 defective trees is (1 / 10) ^ 2 = 1 / 100, which is low enough to be neglected for now, and as compensatory trees can be delivered later even if there's 1 defective tree, the harvester will be able to deliver 3 normal trees. In reality, crunch modes like this will happen occasionally, and most harvesters will likely understand that it's probably inevitable eventually, so as long as these crunch modes won't last for too long, it's still practical to work under such circumstances once in a while, because it's just being reasonably pragmatic. However, in supposedly exceptional cases, the situation's so extreme that, when the harvester's about to sharpen the axe, the boss constantly requests that another tree must be acquired as soon as possible, causing the harvester to never have time to sharpen the axe for a long time, thus having to work more and more ineffectively and inefficiently in the long term. In the case of a somehow dull axe, 12 hours are needed to sharpen it to be somehow sharp, whereas another tree's expected to be acquired within 4 hours, because the chance of having a defective tree cut is 1 / 10, which can be considered small enough to take the risk, and the expected number of normal trees over all trees being cut is 7 of out 10 for a somehow dull axe, whereas 12 hours is enough to cut 3 trees by using such an axe, so at least 2 normal trees can be expected within this period. If this continues, eventually the axe will become fully dull, and 24 hours will be needed to sharpen it to be somehow dull, whereas another tree's expected to be acquired within 8 hours, because the chance of having a defective tree is 1 / 5, which can still be considered controllable to take the risk, especially with an optimistic estimation. While the expected number of normal trees over all trees being cut is 1 of out 5 for a fully dull axe, whereas 24 hours is just enough to cut 3 trees by using such an axe, meaning that the harvester's not expected to make it normally, in practice, the boss will usually unknowingly apply optimism bias(at least until it no longer works) by thinking that there will be no defective trees when just another tree's to be cut, so the harvester will still be forced to continue cutting trees, despite the fact that the axe should be sharpened as soon as possible even when just considering the short term. 
Also, if the boss can readily replace the current harvester with a new one immediately, the boss will rather let the current harvester resign than letting that harvester sharpening the axe to be at least somehow dull, because to the boss, it's always emergencies after emergencies, meaning that the short term's constantly so dire that there's just no room to even consider the long term at all. But why such an undesirable situation will be reached? Other than extreme and rare misfortunes, it's usually due to overly optimistic work schedules not seriously taking the existence of defective and compensatory trees, and the importance of the sharpness of the axe and the need of sharpening the axe into the account, meaning that such unrealistic work schedules are essentially linear(e.g.: if one can cut 10 trees on day one, then he/she can cut 1000 trees on day 100), which is obviously simplistic to the extreme. Occasionally, it can also be because of the inherent risks of sharpening the axe - Sometimes the axe won't be actually sharpened after spending 12, 24 or 36 hours, and while it's extraordinary, the axe might be actually even more dull than before, and most importantly, usually the boss can't directly judge the sharpness of the axe, meaning that it's generally hard for that boss to judge the ROI of sharpening the axe with various sharpness before sharpening, and it's only normal for the boss to distrust what can't be measured objectively by him/herself(on the other hand, normal, defective and compensatory trees are objectively measurable, so the boss will of course emphasize on these KPIs), especially for those having been opting for linear thinking. Of course, the whole axe cutting tree model is highly simplified, at least because: The axe sharpness deterioration isn't a step-wise function(an axe becomes from having a discrete level of sharpness to another such level after cutting a set number of trees), but rather a continuous one(gradual degrading over time) with some variations on the number of trees cut, meaning that when to sharpen the axe in the real world isn't as clear cut as that in the aforementioned model(usually it's when the harvester starts feeling the pain, ineffectiveness and inefficiency of using the axe due to unsatisfactory sharpness, and these feeling has last for a while) Not all normal trees are equal, not all defective trees are equal, and not all compensatory trees are equal(these complications are intentionally simplified in this model because these complexities are hardly measurable) The whole model doesn't take the morale of the harvester into account, except the obvious one that that harvester will resign for using a fully dull axe for too long(but the importance of sharpening the axe will only increase if morale has to be considered as well) In some cases, even when the axe's not fully dull, it's already impossible to sharpen it to be fully or even just somehow sharp(and in really extreme cases, the whole axe can just suddenly break altogether for no apparent reason) Nevertheless, this model should still serve its purpose of making this point across - There's isn't always an universal answer to when to sharpen the axe to reach which level of sharpness, because these questions involve calculations of concrete details(including those critical parts that can't be quantified) on a case-by-case basis, but the point remains that the importance of sharpening the axe should never be underestimated. 
When it comes to professional software engineering: The normal trees are like needed features that work well enough The defective trees are like nontrivial bugs that must be fixed as soon as possible(In general, the worse the code quality of the codebase is, the higher the chance to produce more bugs, produce bugs being more severe, and the more the time's needed to fix each bug with the same severity - More severe bugs generally cost more efforts to fix in the same codebase) The compensatory trees are like extra outputs for fixing those bugs and repairing the damages caused by them The axe is like the codebase that's supposed to deliver the needed features(actually, the axe can also be like those software engineers themselves, when the topic involved is software engineering team management rather than just refactoring) Sharpening the axe is like refactoring(or in the case of the axe referring to software engineers, sharpening the axe can be like letting them to have some vacations to recover from burnouts) A fully sharp axe is like a codebase suffering from the gold plating anti pattern on the code quality aspect(diminishing returns applies to code qualities as well), as if those professional software engineers can't even withstand a tiny amount of technical debt. On the good side, such an ideal codebase is the most unlikely to produce nontrivial bugs, and even when it does, they're most likely fixed with almost no extra efforts needed, because they're usually found way before going into production, and the test suite will point straight to their root causes. A somehow sharp axe is like a codebase with more than satisfactory code qualities, but not to the point of investing too much on this regard(and the technical debt is still doing more good than harm due to its amount under moderation). Such a practically good codebase is still a bit unlikely to produce nontrivial bugs regularly, but it does have a small chance to let some of them leak into production, causing a mild amount of extra efforts to be needed to fix the bugs and repair the damages caused by them. A somehow dull axe is like a codebase with undesirable code qualities, but it's still an indeed workable codebase(although it's still quite painful to work with) with a worrying yet payable amount of technical debt. Undesirable yet working codebases like this probably has a significant chance to produce nontrivial bugs frequently, and a significant chance for quite some of them to leak into production, causing a rather significant amount of extra efforts to be needed to fix the bugs and repair the damages caused by them. A fully dull axe is like a unworkable codebase where it must be refactored as soon as possible, because even senior professional software engineers can easily create more severe bugs than needed features with such a codebase(actually they'll be more and more inclined to rewrite the codebase the longer it's not refactored), causing their productivity to be even negative in the worst cases. An effectively broken codebase like this is guaranteed to has a huge chance to produce nontrivial bugs all the time, and nearly all of them will leak into production, causing an insane amount of extra efforts to be needed to fix the bugs and repair the damages caused by them(so the professionals will be always fixing bugs instead of delivering features), provided that these recovery moves can be successful at all. 
A broken axe is like a codebase being totally technical bankrupt, where the only way out is to completely rewrite the whole thing from scratch, because no one can fathom a thing in that codebase at that point, and sticking to such a codebase is undoubtedly a sunk cost fallacy. While a codebase with overly ideal code qualities can deliver the needed features in the most effective and efficient ways possible as long as the codebase remains in this state, in practice the codebase will quickly degrade from such an ideal state to a more practical state where the code qualities are still high(on the other hand, going back to this state is very costly in general no matter how effective and efficient the refactoring is), because this state is essentially mysophobia in terms of code qualities. On the other hand, a codebase with reasonably high code qualities can be rather resistant from code quality deterioration(but far from 100% resistant of course), especially when the professional software engineers are disciplined, experienced and qualified, because degrading code qualities for such codebases are normally due to quick but dirty hacks, which shouldn't be frequently needed for senior professional software engineers. To summarize, a senior professional software engineer should strive to keep the codebase to have a reasonably high code quality, but not to the point of not even having good technical debts, and when the codebase has eventually degraded to have just barely tolerable code quality, it's time to refactor it to become having very satisfactory, but not overly ideal, code quality again, except in the case of occasional crunch modes, where even a disciplined, experienced and qualified expert will have to get the hands dirty once in a while on the still workable codebase but with temporarily unacceptable code quality, just that such crunch modes should be ended as soon as possible, which should be feasible with a well-established work schedule.
  5. Abbreviations HID - High Information Density LID - Low Information Density HIV - High Information Volume LIV - Low Information Volume HID/HIV - Those who can handle both HID and HIV well HID/LIV - Those who can handle HID well but can only handle LIV well LID/HIV - Those who can only handle LID well but can handle HIV well LID/LIV - Those who can only handle LID and LIV well TL;DR(The Whole Article Takes About 30 Minutes To Read In Full Depth) Information Density A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. Unfortunately, different programmers have different capacities on facing information density. In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes are just wasting their time in their perspectives; Those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while writing and reading terse codes are just too complicated and convoluted in their perspectives. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies, so all we can do is to try hard. Information Volume A code chunk having a large piece of information content that aren't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that aren't abstracted away from that code chunk has LIV. Unfortunately, different software engineers have different capacities on facing information volume, so it seems that the best way's to find a happy medium that can break a very long function into fathomable chunks on one hand, while still keeping the function call stack manageable on the other. In general, those who can handle very HIV well will prefer very long functions, as it'll be more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way for such software engineers, while writing and reading very short functions are just going the opposite directions in their perspectives; Those who can only handle very LIV well will prefer very short functions, as it'll be easier and simpler to reason about well-defined abstractions(as long as they don't leak in nontrivial ways) that way for such software engineers, while writing and reading long functions are just going the opposite directions in their perspectives. Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with codes having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies(especially when those small function abstractions do leak in nontrivial ways), so all we can do is to try hard. 
Combining Information Density With Information Volume While information density and volume are closely related, there's no strict implications from one to the other, meaning that there are different combinations of these 2 factors and the resultant style can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions: Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep Teams With Programmers Having Different Styles It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are being more and more extreme. The combinations of these conflicts may be: Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles Conclusions Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there's quite some middle grounds to play with. In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle grounds well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some of the software engineers falling into at least 1 extremes, so we should still know how to work well with them. After all, nowadays most of the real life business codebase are about teamwork but not lone wolves. By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think of some aspects behind codebase readability and the nature of conflicts about it, and that we can be more able to deal with more different kinds of codebase and software engineers better. I think that it's more feasible for us to be able to read codebase with different information density and volume than asking others and the codebase to accommodate with our information density/volume limitations. Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large(e.g.: the existence of consistent formatting and meaningful naming) and partially subjective at large(e.g.: the ability to handle different kinds of information density and volume for different software engineers). Maybe many avoidable conflicts involving readability stems from the tendency that many software engineers treat readability as easy, simple and small concept that are entirely objective. 
Information Density A Math Analogy Consider the following math formula that are likely learnt in high school(Euler's Formula): https://github.com/Double-X/Image-List/blob/master/1590658698206.png Most of those who've studied high school math well should immediately fathom this, but for those who don't, you may want to try to fathom this text equivalent, which is more verbose: The Euler number to the power of (the imaginary unit multiplied by theta in radian) equals cosine theta in radian plus the imaginary unit multiplied by sine theta in radian I hope that those who can't fathom the above formula can at least fathom the above text This brings the importance of information density: A small piece of information representation referring to a large piece of information content has HID, whereas a large piece of information representation referring to a small piece of information content has LID. For instance, the above formula has HID whereas the above text has LID. In this example, those who're good at math in general and high school math in particular will likely prefer the formula over the text equivalent as they can probably fathom the former instantly while feeling that the latter's just wasting their time; Those who're bad at math in general and high school math in particular will likely prefer the text equivalent over the formula as they might not even know the fact that cisx is the short form of cosx + isinx. For those who can handle HID well, even if they don't know what Euler number is at all, they should still be able to deduce these corollaries within minutes if they know what cisx is: https://github.com/Double-X/Image-List/blob/master/1590660502890.png But for those who can only handle LID well, they'll unlikely be able to know what's going on at all, even if they know how to use the binomial theorem and the truncation operator. Now let's try to fathom this math formula that can be fathomed using just high school math: https://github.com/Double-X/Image-List/blob/master/1590661116897.png While it doesn't involve as much math knowledge nor concepts as those in the Euler's Formula, I'd guess that only those who're really, really exceptional in high school math and math in general can fathom this within seconds, let alone instantly, all because of this formula having such a ridiculously HID. If you can really fathom this instantly, then I'd think that you can really handle very HID very well, especially when it comes to math So what if we try to explain this by text? I'd come up with the following try: (The summation of m variables from x1 to xm) to the power of n equals the summation of (n elements, each being the combination of selecting r elements from n - 1 elements, where r is the outermost summation counter from 0 to n - 1, multiplied by the summation of (m elements, each being xi to the power of n - r, where i is the middle summation counter from 1 to m, multiplied by (the summation of m variables from x1 to xm except xi) to the power of r)) Maybe you can finally fathom what this formula is, but still probably not what it really means nor how to use it meaningfully, let alone deducing any useful corollary. However, with the text version, at least we can clearly see just how high the information density is in that formula, as even the information density for the text version isn't actually anything low. These 2 math examples aim to show that, HID, as long as being kept in moderation, is generally preferred over the LID counterparts. 
But once the information density becomes too unnecessarily and unreasonably high, the much more verbose versions seeming to be too verbose is actually preferred in general, especially when their information density isn't low. Some Examples Showing HID vs LID There are programming parallels to the above math analogy: terse and verbose codes. Unfortunately, different programmers have different capacities on facing information density, just like different people have different capacities on fathoming math. For instance, the ternary operator is a very obvious terse example on this(Javascript ES5): var x = condition1 ? value1 : condition2 ? value2 : value3; Whereas a verbose if/else if/else equivalent can be something like this: var x; if (condition1 === true) { x = value1; } else if (condition2 === true) { x = value2; } else { x = value3; } Those who're used to read and write terse codes will likely like the ternary operator version as the if/else if/else version will likely be just too verbose for them; Those who're used to read and write verbose codes will likely like the if/else if/else version as the ternary operator version will likely be just too terse for them(I've seen production codes with if (variable === true), so don't think that the if/else if/else version can only be totally made up examples). In this case, I've worked with both styles, and I guess that most programmers can handle both. Similarly, Javascript and some other languages support short circuit evaluation, which is also a terse style. For instance, the || and && operators can be short circuited this way: return isValid && (array || []).concat(object || canUseDefault && default); Where a verbose equivalent can be something like this(it's probably too verbose anyway): var returnedValue; if (isValid === true) { var returnedArray; var isValidArray = (array !== null) && (array !== undefined); if (isValidArray === true) { returnedArray = array; } else { returnedArray = []; } var pushedObject; var isValidObject = (object !== null) && (object !== undefined); if (isValidObject === true) { pushedObject = object; } else if (canUseDefault === true) { pushedObject = default; } else { pushedObject = canUseDefault; } if (Array.isArray(pushedObject) === true) { returnedArray = returnedArray.concat(pushedObject); } else { returnedArray = returnedArray.concat([pushedObject]); } returnedValue = returnedArray; } else { returnedValue = isValid; } return returnedValue; Clearly the terse version has very HID while the verbose version has very LID. Those who can handle HID well will likely fathom the terse version instantly while needing minutes just to fathom what the verbose version's really trying to achieve and why it's not written in the terse version to avoid wasting time to read so much code doing so little meaningful things; Those who can only handle LID well will likely fathom the verbose version within minutes while probably giving up after trying to fathom the terse version for seconds and wonder what's the point of being concise when it's doing just so many things in just 1 line. In this case, I seriously suspect whether anyone fathoming Javascript will ever write in the verbose version at all, when the terse version is actually one of the popular idiomatic styles. 
Now let's try to fathom this really, really terse codes(I hope you won't face this in real life): for (var texts = [], num = min; num <= max; num += increment) { var primeMods = primes.map(function(prime) { return num % prime; }); texts.push(primeMods.reduce(function(text, mod, i) { return (text + (mod || words[i])).replace(mod, ""); }, "") || num); } return texts.join(textSeparator); If you can fathom this within seconds or even instantly, then I'd admit that you can really handle ridiculously HID exceptionally well. However, adding these lines will make it clear: var min = 1, max = 100, increment = 1; var primes = [3, 5], words = ["Fizz", "Buzz"], textSeparator = "\n"; So all it's trying to do is the very, very popular Fizz Buzz programming test in a ridiculously terse way. So let's try this much more verbose version of this Fizz Buzz programming test: var texts = []; for (var num = min; num <= max; num = num + increment) { var text = ""; var primeCount = primes.length; for (var i = 0; i < primeCount; i = i + 1) { var prime = primes[i]; var mod = num % prime; if (mod === 0) { var word = words[i]; text = text + word; } } if (text === "") { texts.push(num); } else { texts.push(text); } } return texts.join(textSeparator); Even those who can handle very HID well should still be able to fathom this verbose version within seconds, so do those who can only handle very LID well. Also, considering the inherent complexity of this generalized Fizz Buzz, the verbose version doesn't have much boilerplate, even when compared to the terse version, so I don't think those who can handle very HID well will complain about the verbose version much. On the other hand, I doubt whether those who can only handle very LID well can even fathom the terse version, let alone in a reasonable amount of time(like minutes), if I didn't tell that it's just Fizz Buzz. In this case, I really doubt what's the point of writing in the terse version when I don't see any nontrivial issue in the verbose version(while the terse version's likely harder to fathom). Back To The Math Analogy Imagine that a mathematician and math professor who's used to teach postdoc math now have to teach high school math to elementary math students(I've heard that a very small amount of parents are so ridiculous to want their elementary children to learn high school math even when those children aren't interested in nor good at math). That's almost mission impossible, but all that teacher can do is to first consolidate the elementary math foundation of those students while fostering their interest in math, then gradually progress to middle school math, and finally high school math once those students are good at middle school math. All those students can do is to work extremely hard to catch up such great hurdles. Unfortunately, it seems to me that it'd take far too much resources, especially time, when those who can handle very HID well try to teach those who can only handle very LID well to handle HID. Even when those who can only handle very LID well can eventually be nurtured to meet the needs imposed by the codebase, it's still unlikely to be worth it, especially for software teams with very tight budgets, no matter how well intentioned it is. So should those who can only handle very LID well train up themselves to be able to handle HID? I hope so, but I doubt that it's similar to asking a high school student to fathom postdoc math. 
While it's possible, I still guess that most of us will think that it's so costly and disproportional just to apply actually basic math formulae that are just written in terse styles; Should those who can handle very HID well learn how to deal with LID well as well? I hope so, but I doubt that's similar to asking mathematicians to abandon their mother tongue when it comes to math(using words instead of symbols to express math). While it's possible, I still guess that most of us will think that it's so excessively ineffective and inefficient just to communicate with those who're very poor at math when discussing about advanced math. So it seems that maybe those who can handle HID well and those who can only handle LID well should avoid working with each other as much as possible. But that'd mean all these: The current software team must identify whether the majority can handle HID well or can only handle LIV well, which isn't easy to do and most often totally ignored The software engineering job requirement must state that whether being able to deal with HID well will be prioritized or even required, which is an uncommon statement All applicants must know whether they can handle HID well, which is overlooked The candidate screening process must be able to tell who can handle HID well Most importantly, the team must be able to hire enough candidates who can handle HID well, and it's obvious that many software teams just won't be able to do that Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HID well should try their best to only touch the HID part of the codebase, while those who can only handle very LID well should try their best to only touch the LID part of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't be really that modular. A Considerable Solution With an IDE supporting collapsing comments, one can try something like this: /* var returnedValue; if (isValid === true) { var returnedArray; var isValidArray = (array !== null) && (array !== undefined); if (isValidArray === true) { returnedArray = array; } else { returnedArray = []; } var pushedObject; var isValidObject = (object !== null) && (object !== undefined); if (isValidObject === true) { pushedObject = object; } else if (canUseDefault === true) { pushedObject = default; } else { pushedObject = canUseDefault; } if (Array.isArray(pushedObject) === true) { returnedArray = returnedArray.concat(pushedObject); } else { returnedArray = returnedArray.concat([pushedObject]); } returnedValue = returnedArray; } else { returnedValue = isValid; } return returnedValue; */ return isValid && (array || []).concat(object || canUseDefault && default); Of course it's not practical when the majority of the codebase's so terse that those who can only handle very LID well will struggle most of the time, but those who can handle very HID well can try to do the former some favors when there aren't lots of terse codes for them. The point of this comment's to be a working compromise between the needs of reading codes effectively and efficiently for those who can handle very HID well, and the needs of fathoming code easily and simply for those who can only handle very LID well. 
Summary In general, those who can handle very HID well will prefer very terse codes, as it'll be more effective and efficient to both write and read them that way for such software engineers, while writing and reading verbose codes are just wasting their time in their perspectives; Those who can only handle very LID well will prefer very verbose codes, as it'll be easier and simpler to both write and read them that way for such software engineers, while writing and reading terse codes are just too complicated and convoluted in their perspectives. Ideally, we should be able to handle very HID well while still being very tolerant towards LID, so we'd be able to work well with codes having all kinds of information density. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies, so all we can do is to try hard. Information Volume An Eating Analogy Let's say we're ridiculously big eaters who can eat 1kg of meat per meal. But can we eat all that 1kg of meat in just 1 chunk? Probably not, as our mouth just won't be big enough, so we'll have to cut it into digestible chunks. However, can we eat it if it becomes a 1kg of very fine-grained meat powder? Maybe, but that's likely daunting or even dangerous(extremely high risk of severe choking) for most of us. So it seems that the best way's to find a happy medium that works for us, like cutting it into chunks that are just small enough for our mouth to digest. There might still be many chunks but at least they'll be manageable enough. The same can be largely applied to fathoming codes, even though there are still differences. Let's say you're reading a well-documented function with 100k lines and none of its business logic are duplicated in the entire codebase(so breaking this function won't help code reuse right now). Unless we're so good at fathoming big functions that we can keep all these 100k lines of implementation details in our head as a whole, reading such a function will likely be daunting or even dangerous(extremely high risk of fathom it all wrong) for most of us, assuming that we can indeed fathom it within a feasible amount of time(like within hours). On the other hand, if we break that 100k line function into extremely small functions so that the function call stack can be as deep as 100 calls, we'll probably be in really big trouble when we've to debug these functions having bugs that don't have apparently obvious causes nor caught by the current test suite(no test suite can catch all bugs after all). After all, traversing such a deep call stack without getting lost and having to start all over again is like eating tons of very fine-grained meat powders without ever choking severely. Even if we can eventually fix all those bugs with the test suite updated, it'll still unlikely to be done within a reasonable amount of time(talking about days or even weeks when the time budget is tight). This brings the importance of information volume: A code chunk having a large piece of information content that aren't abstracted away from that code chunk has HIV, whereas a code chunk having only a small piece of information content that aren't abstracted away from that code chunk has LIV. For instance, the above 100k line function has HIV whereas the above small functions with deep call stack has LIV. 
So it seems that the best way's to find a happy medium that can break that 100k line function into fathomable chunks on one hand, while still keeping the call stack manageable on the other. For instance, if possible, breaking that 100k line function into those in which the largest ones are 1k line functions and the ones with the deepest call stack is 10 calls can be a good enough balance. While fathoming a 1k line function is still hard for most of us, it's at least practical; While debugging functions having call stacks with 10 calls is still time-consuming for most of us, it's at least realistic to be done within a tight budget. A Small Example Showing HIV vs LIV Unfortunately, different software engineers have different capacities on facing information volume, just like different people have different mouth size. Consider the following small example(Some of my Javascript ES5 codes with comments removed): LIV Version(17 methods with the largest being 4 lines and the deepest call stack being 11) - $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) { var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { return this._associativeResult(note, argObj_, funcNameSuffix); } return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) { var partResults = this._partResults(note, argObj_, funcNameSuffix); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) { var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) { var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) { var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; 
$._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) { var list = this._uncachedPairFuncList(note, "WithCache"); this._cache.updatePairFuncList(note, list); return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); return cache || this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) { var list = this._pairFuncListPartWithoutCache(note, part); this._cache.updatePairFuncListPart(note, part, list); return list; }; $._pairFuncListPartWithoutCache = function(note, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); return this._cache.partListData(part, this._battler).map(func); }; HIV Version(10 methods with the largest being 12 lines and the deepest call stack being 5) - $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedResultWithCache START var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; // $._updatedResultWithCache END }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { // $._associativeResult START // $._partResults START var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); var partResults = priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); // $._partResults END var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); // $._associativeResult START } // $._nonAssociativeResult START var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); // $._nonAssociativeResult END }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedPartResultWithCache_ START var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; // $._updatedPartResultWithCache_ END }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); if (cache) return cache; // $._updatedPairFuncListWithCache START var list = this._uncachedPairFuncList(note, 
"WithCache"); this._cache.updatePairFuncList(note, list); return list; // $._updatedPairFuncListWithCache END }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); if (cache) return cache; // $._updatedPairFuncListPartWithCache START var list = this._pairFuncListPartWithoutCache(note, part); this._cache.updatePairFuncListPart(note, part, list); return list; // $._updatedPairFuncListPartWithCache END }; $._pairFuncListPartWithoutCache = function(note, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); return this._cache.partListData(part, this._battler).map(func); }; In case you can't fathom what this example's about, you can read this simple flow chart(It doesn't mention the fact that the actual codes also handle whether the cache will be used): Even though the underlying business logic's easy to fathom, different people will likely react to the HIV and LIV Version differently. Those who can handle very HIV well will likely find the LIV version less readable due to having to unnecessarily traverse all these excessively small methods(the smallest ones being 1 liners) and enduring the highest call stack of 11 calls(from $.result to $._pairFuncListPartWithoutCache); Those who can only handle very LIV well will likely find the HIV version less readable due to having to unnecessarily fathom all these excessively mixed implementation details as a single unit in one go from the biggest method with 12 lines and enduring the presence of 3 different levels of abstractions combined just in the biggest and most complex method($._uncachedResult). Bear in mind that it's just a small example which is easy to fathom and simple to explain, so the differences between the HIV and LIV styles and the potential conflicts between those who can handle very HIV well and those who can only handle very LIV well will only be even larger and harder to resolve when it comes to massive real life production codebases. Back To The Eating Analogy Imagine that the size of the mouth of various people can vary so much that the largest digestible chunk of those with the smallest mouth are as small as a very fine-grained powder in the eyes of those with the largest mouth. Let's say that these 2 extremes are going to eat together sharing the same meal set. How should these meals be prepared? An obvious way's to give them different tools to break these meals into digestible chunks of sizes suiting their needs so they'll respectively use the tools that are appropriate for them, meaning that the meal provider won't try to do these jobs themselves at all. It's possible that those with the smallest mouth will happily break those meals into very fine-grained powders, while those with the largest mouth will just eat each individual food as a whole without much trouble. Unfortunately, it seems to me that there's still no well battle-tested automatic tools that can effectively and efficiently break a large code chunk into well-defined smaller digestible code chunks with configurable size and complexity without nontrivial side effects, so those who can only handle very LIV well will have to do it manually when having to fathom large functions. 
Also, even when there's such a tool, such automatic work's still effectively refactoring that function, thus probably irritating colleagues who can handle very HIV well. So should those who can only handle very LIV well train up themselves to be able to deal with HIV? I hope so, but I doubt that's similar to asking those with very small mouths to increase their mouth size. While it's possible, I still guess that most of us will think that it's so costly and disproportional just to eat foods in chunks that are too large for them; Should those who can handle very HIV well learn how to deal with LIV well as well? I hope so, but I doubt that's similar to asking those with very large mouths to force themselves to eat very fine-grained meat powders without ever choking severely(getting lost when traversing a very deep call stack). While it's possible, I still guess that most of us will think that it's so risky and unreasonable just to eat foods as very fine-grained powders unless they really have no other choices at all(meaning that they should actually avoid these as much as possible). So it seems that maybe those who can handle HIV well and those who can only handle LIV well should avoid working with each other as much as possible. But that'd mean all these: The current software team must identify whether the majority can handle HIV well or can only handle LIV well, which isn't easy to do and most often totally ignored The software engineering job requirement must state that whether being able to deal with HIV well will be prioritized or even required, which is an uncommon statement All applicants must know whether they can handle HIV well, which is overlooked The candidate screening process must be able to tell who can handle HIV well Most importantly, the team must be able to hire enough candidates who can handle HIV well, and it's obvious that many software teams just won't be able to do that Therefore, I don't think it's an ideal or even reasonable solution, even though it's possible. Alternatively, those who can handle very HIV well should try their best to only touch the HIV part of the codebase, while those who can only handle very LIV well should try their best to only touch the LIV part of the codebase. But needless to say, that's way easier said than done, especially when the team's large and the codebase can't be really that modular. 
An Imagined Solution Let's say there's an IDE which can display the function calls in the inlined form, like from: $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { return this._uncachedResult(note, argObj_, "WithoutCache"); } return this._updatedResult(note, argObj_); }; $._updatedResult = function(note, argObj_) { var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedResultWithCache(note, argObj_); }; $._updatedResultWithCache = function(note, argObj_) { var result = this._uncachedResult(note, argObj_, "WithCache"); this._cache.updateResult(note, argObj_, result); return result; }; $._uncachedResult = function(note, argObj_, funcNameSuffix) { if (this._rules.isAssociative(note)) { return this._associativeResult(note, argObj_, funcNameSuffix); } return this._nonAssociativeResult(note, argObj_, funcNameSuffix); }; $._associativeResult = function(note, argObj_, funcNameSuffix) { var partResults = this._partResults(note, argObj_, funcNameSuffix); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); }; $._partResults = function(note, argObj_, funcNameSuffix) { var priorities = this._rules.priorities(note); var funcName = "_partResult" + funcNameSuffix + "_"; var resultFunc = this[funcName].bind(this, note, argObj_); return priorities.map(resultFunc).filter(_SATB.IS_VALID_RESULT); }; $._partResultWithoutCache_ = function(note, argObj_, part) { return this._uncachedPartResult_(note, argObj_, part, "WithoutCache"); }; $._partResultWithCache_ = function(note, argObj_, part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; return this._updatedPartResultWithCache_(note, argObj_, part); }; $._updatedPartResultWithCache_ = function(note, argObj_, part) { var result = this._uncachedPartResult_(note, argObj_, part, "WithCache"); this._cache.updatePartResult(note, argObj_, part, result); return result; }; $._uncachedPartResult_ = function(note, argObj_, part, funcNameSuffix) { var list = this["_pairFuncListPart" + funcNameSuffix](note, part); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }; $._nonAssociativeResult = function(note, argObj_, funcNameSuffix) { var list = this["_pairFuncList" + funcNameSuffix](note); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult(list, note, argObj_, defaultResult); }; $._pairFuncListWithoutCache = function(note) { return this._uncachedPairFuncList(note, "WithoutCache"); }; $._pairFuncListWithCache = function(note) { var cache = this._cache.pairFuncList_(note); return cache || this._updatedPairFuncListWithCache(note); }; $._updatedPairFuncListWithCache = function(note) { var list = this._uncachedPairFuncList(note, "WithCache"); this._cache.updatePairFuncList(note, list); return list; }; $._uncachedPairFuncList = function(note, funcNameSuffix) { var funcName = "_pairFuncListPart" + funcNameSuffix; return this._rules.priorities(note).reduce(function(list, part) { return list.concat(this[funcName](note, part)); }.bind(this), []); }; $._pairFuncListPartWithCache = function(note, part) { var cache = this._cache.pairFuncListPart_(note, part); return cache || this._updatedPairFuncListPartWithCache(note, part); }; $._updatedPairFuncListPartWithCache = function(note, part) { var list = this._pairFuncListPartWithoutCache(note, part); 
this._cache.updatePairFuncListPart(note, part, list); return list; }; $._pairFuncListPartWithoutCache = function(note, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); return this._cache.partListData(part, this._battler).map(func); }; To be displayed as something like this: $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { // $._uncachedResult START if (this._rules.isAssociative(note)) { // $._associativeResult START // $._partResults START var priorities = this._rules.priorities(note); var partResults = priorities.map(function(part) { // $._partResultWithoutCache START // $._uncachedPartResult_ START // $._pairFuncListPartWithoutCache START var func = this._pairs.pairFuncs.bind(this._pairs, note); var list = this._cache.partListData( part, this._battler).map(func); // $._pairFuncListPartWithoutCache END if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); // $._uncachedPartResult_ END // $._partResultWithoutCache END }).filter(_SATB.IS_VALID_RESULT); // $._partResults END var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); // $._associativeResult START } // $._nonAssociativeResult START // $._pairFuncListWithoutCache START // $._uncachedPairFuncList START var priorities = this._rules.priorities(note); var list = priorities.reduce(function(list, part) { // $._pairFuncListPartWithoutCache START var func = this._pairs.pairFuncs.bind(this._pairs, note); var l = this._cache.partListData( part, this._battler).map(func); // $._pairFuncListPartWithoutCache END return list.concat(l); }.bind(this), []); // $._uncachedPairFuncList END // $._pairFuncListWithoutCache END var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( list, note, argObj_, defaultResult); // $._nonAssociativeResult END // $._uncachedResult END } var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedResultWithCache START // $._uncachedResult START var result; if (this._rules.isAssociative(note)) { // $._associativeResult START // $._partResults START var priorities = this._rules.priorities(note); var partResults = priorities.map(function(part) { // $._partResultWithCache START var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; // $._updatedPartResultWithCache_ START // $._uncachedPartResult_ START // $._pairFuncListPartWithCache START var c = this._cache.pairFuncListPart_(note, part); var list; if (c) { list = c; } else { // $._updatedPairFuncListPartWithCache START // $._uncachedPairFuncListPart START var func = this._pairs.pairFuncs.bind(this._pairs, note); list = this._cache.partListData( part, this._battler).map(func); // $._uncachedPairFuncListPart END this._cache.updatePairFuncListPart(note, part, list); // $._updatedPairFuncListPartWithCache END } // $._pairFuncListPartWithCache END var result = undefined; if (list.length > 0) { result = this._rules.chainedResult(list, note, argObj_); } // $._uncachedPartResult_ END this._cache.updatePartResult(note, argObj_, part, result); return result; // $._updatedPartResultWithCache_ END // $._partResultWithCache END }).filter(_SATB.IS_VALID_RESULT); // $._partResults END var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult( partResults, note, argObj_, defaultResult); // $._associativeResult START } // $._nonAssociativeResult START // 
$._pairFuncListWithCache START var cache = this._cache.pairFuncList_(note), list; if (cache) { list = cache; } else { // $._updatedPairFuncListWithCache START // $._uncachedPairFuncList START var priorities = this._rules.priorities(note); var list = priorities.reduce(function(list, part) { // $._pairFuncListPartWithCache START var cache = this._cache.pairFuncListPart_(note, part); var l; if (cache) { l = cache; } else { // $._updatedPairFuncListPartWithCache START // $._uncachedPairFuncListPart START var func = this._pairs.pairFuncs.bind(this._pairs, note); l = this._cache.partListData( part, this._battler).map(func); // $._uncachedPairFuncListPart END this._cache.updatePairFuncListPart(note, part, l); // $._updatedPairFuncListPartWithCache END } return list.concat(l); // $._pairFuncListPartWithCache END }.bind(this), []); // $._uncachedPairFuncList END this._cache.updatePairFuncList(note, list); // $._updatedPairFuncListWithCache END } // $._pairFuncListWithCache END var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult(list, note, argObj_, defaultResult); // $._nonAssociativeResult END // $._uncachedResult END this._cache.updateResult(note, argObj_, result); return result; // $._updatedResultWithCache END }; Or this one without comments indicating the starts and ends of the inlined functions: $.result = function(note, argObj_) { if (!$gameSystem.satbParam("_isCached")) { if (this._rules.isAssociative(note)) { var priorities = this._rules.priorities(note); var partResults = priorities.map(function(part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); var list = this._cache.partListData( part, this._battler).map(func); if (list.length <= 0) return undefined; return this._rules.chainedResult(list, note, argObj_); }).filter(_SATB.IS_VALID_RESULT); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( partResults, note, argObj_, defaultResult); } var priorities = this._rules.priorities(note); var list = priorities.reduce(function(list, part) { var func = this._pairs.pairFuncs.bind(this._pairs, note); var l = this._cache.partListData( part, this._battler).map(func); return list.concat(l); }.bind(this), []); var defaultResult = this._pairs.default(note, argObj_); return this._rules.chainedResult( list, note, argObj_, defaultResult); } var cache = this._cache.result_(note, argObj_); if (_SATB.IS_VALID_RESULT(cache)) return cache; var result; if (this._rules.isAssociative(note)) { var priorities = this._rules.priorities(note); var partResults = priorities.map(function(part) { var cache = this._cache.partResult_(note, argObj_, part); if (_SATB.IS_VALID_RESULT(cache)) return cache; var c = this._cache.pairFuncListPart_(note, part); var list; if (c) { list = c; } else { var func = this._pairs.pairFuncs.bind(this._pairs, note); list = this._cache.partListData( part, this._battler).map(func); this._cache.updatePairFuncListPart(note, part, list); } var result = undefined; if (list.length > 0) { result = this._rules.chainedResult(list, note, argObj_); } this._cache.updatePartResult(note, argObj_, part, result); return result; }).filter(_SATB.IS_VALID_RESULT); var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult( partResults, note, argObj_, defaultResult); } var cache = this._cache.pairFuncList_(note), list; if (cache) { list = cache; } else { var priorities = this._rules.priorities(note); var list = priorities.reduce(function(list, part) { var cache = 
this._cache.pairFuncListPart_(note, part); var l; if (cache) { l = cache; } else { var func = this._pairs.pairFuncs.bind(this._pairs, note); l = this._cache.partListData( part, this._battler).map(func); this._cache.updatePairFuncListPart(note, part, l); } return list.concat(l); }.bind(this), []); this._cache.updatePairFuncList(note, list); } var defaultResult = this._pairs.default(note, argObj_); result = this._rules.chainedResult(list, note, argObj_, defaultResult); this._cache.updateResult(note, argObj_, result); return result; }; With just 1 click on $.result. Bear in mind that the actual codebase hasn't changed one bit, it's just that the IDE will display the codes from the original LIV form to the new HIV form. The goal this feature's to keep the codebase in the LIV form, while still letting those who can handle HIV well to be able to read the codebase displayed in the HIV version. It's very unlikely for those who can only handle very LIV well to be able to fathom such a complicated and convoluted method with 73 lines and so many different levels of varying abstractions and implementation details all mixed up together, not to mention the really vast amount of completely needless code duplication that aren't even easy nor simple to spot fast; Those who can handle very HIV well, however, may feel that a 73 line method is so small that they can hold everything inside in their head as a whole very quickly without a hassle. Of course, one doesn't have to show everything at once, so besides the aforementioned feature that inlines everything in the reading mode with just 1 click, the IDE should also support inlining a function at a time. Let's say we're to reveal _uncachedPairFuncListPart: $._updatedPairFuncListPartWithCache = function(note, part) { var list = this._uncachedPairFuncListPart(note, part); this._cache.updatePairFuncListPart(note, part, list); return list; }; Clicking that method name in the above method should lead to something like this: $._updatedPairFuncListPartWithCache = function(note, part) { // $._updatedPairFuncListPartWithCache START var func = this._pairs.pairFuncs.bind(this._pairs, note); var list = this._cache.partListData( part, this._battler).map(func); // $._updatedPairFuncListPartWithCache END this._cache.updatePairFuncListPart(note, part, list); return list; }; Similarly, clicking the method name updatePairFuncListPart should reveal the implemention details of that method of this._cache, provided that the IDE can access the code of that class. Such an IDE, if even possible in the foreseeable future, should at least reduce the severity of traversing a deep call stack with tons of small functions for those who can handle very HIV well, if not removing the problem entirely, without forcing those who can only handle very LIV well to deal with HIV, and without the issue of fighting for refactoring in this regard. 
Summary In general, those who can handle very HIV well will prefer very long functions, as it'll be more effective and efficient to draw the full picture without missing any nontrivial relevant detail that way for such software engineers, while writing and reading very short functions are just going the opposite directions in their perspectives; Those who can only handle very LIV well will prefer very short functions, as it'll be easier and simpler to reason about well-defined abstractions(as long as they don't leak in nontrivial ways) that way for such software engineers, while writing and reading long functions are just going the opposite directions in their perspectives. Ideally, we should be able to handle very HIV well while still being very tolerant towards LIV, so we'd be able to work well with codes having all kinds of information volume. Unfortunately, very effective and efficient software engineers are generally very intolerant towards extreme ineffectiveness or inefficiencies(especially when those small function abstractions do leak in nontrivial ways), so all we can do is to try hard. Combining Information Density With Information Volume Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy Imagine that you're reading a well-documented 100k line function where almost every line's written like some of the most complex math formulae. I'd guess that even the best of the best software engineers will never ever want to touch this perverted beast again in their lives. Usually such codebase are considered dead and will thus be probably rewritten from scratch. Of course, HID + HIV isn't always this extreme, as the aforementioned 73 line version of $.result also falls into this category. Even though it'd still be a hellish nightmare for most software engineers to work with if many functions in the codebase are written this way, it's still feasible to refactor them into very high quality code within a reasonably tight budget if we've the highest devotions, diligence and disciplines possible. While such an iron fist approach should only be the last resort, sometimes the it's called for so we should be ready. Nevertheless, try to avoid HID + HIV as much as possible, unless the situation really, really calls for it, like optimizing a massive production codebase to death(e.g.: gameplay codes), or when the problem domain's so chaotic and unstable that no sane nor sensible architecture will survive for even just a short time(pathetic architectures can be way worse than none). If you still want to use this style even when it's clearly unnecessary, you should have the most solid reasons and evidence possible to prove that it's indeed doing more good than harm. Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First For instance, the below codes falls into this category: return isValid && (array || []).concat(object || canUseDefault && default); Imagine that you're reading a codebase having mostly well-defined and well-documented small functions(but far from being mostly 1 liners) but most of those small functions are written like some the most complex math formulae. 
While fathoming such codes at first will be very difficult, because the functions are well-documented, those functions will be easy to edit once you've fathomed it with the help of those comments; Because the functions are small enough and well-defined, those functions will be easy to use once you've fathomed how they're being called with the help of those callers who're themselves high quality codes. Of course, HID + LIV doesn't always mean small short term pains with large long term pleasures, as it's impossible to ensure that none of those abstractions will ever leak in nontrivial ways. While the codebase will be easy to work with when it only ever has bugs that are either caught by the test suite or have at least some obvious causes, such codebase can still be daunting to work with once it produces rare bugs that are hard to even reproduce, all because of the fact that it's very hard to form the full pictures with every last bit of nontrivial relevant detail of massive codebases having mostly small but very terse functions. Nevertheless, as long as all things are kept in moderation(one can always try in this regard), HID + LIV is generally advantageous as long as the codebase's large enough to warrant large scale software architectures and designs(the lifespan of the codebase should also be long enough), but not so large that no one can form the full picture anymore, as the long term pleasures will likely be large and long enough to outweigh short term pains a lot here. Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate Think of an extremely verbose codebase having full of boilerplate and exceptionally long functions. Maybe those functions are long because of the verbosity, but you usually can't tell before actually reading them all. Anyway, you'll probably feel that the codebase's just wasting lots of your time once you realize that most of those long functions aren't actually doing much. Think of the aforementioned 28 line verbose Javascript examples having an extremely easy, simple and small terse 1 line counterpart, and think of the former being ubiquitous in the codebase. I guess that even the most verbose software engineers will want to refactor it all, as working with it'd just be way too ineffective and inefficient otherwise. Of course, LID + HIV isn't always that bad, especially when things are kept in moderation. At least, it'd be nice for most newcomers to fathom the codebase, so codebases written in this style can actually be very beginner-friendly, which is especially important for software teams having very high turnover rates. Even though it's unlikely to be able to work with such codebase effectively nor efficiently no matter how much you've fathomed it due to the heavy verbosity and loads of boilerplate, the problem will be less severe if it's short-lived. Also, writing codes in this style can be extremely fast at first, even though it'll gradually become slower and slower, so this style's very useful in at least prototyping/making PoCs. Nevertheless, LID + HIV shouldn't be used on codebases that'd already be very large without the extra verbosity nor boilerplate, especially when it's going to have a very long life span. Just think of a codebase that can be controlled into the 100k scale all with very terse codes(but still readable), but reaching the 10M scale because of complete refactoring of all those terse codes into tons of verbose codes with boilerplate. 
Needless to say, almost no one will continue on this road if he/she knows that the codebase will become that large that way. Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep For instance, the below codes fall into this category: /* This is the original codes $._chainedResult = function(list, note, argObj_, initVal_) { var chainedResultFunc = this._rules.chainResultFunc(note); return chainedResultFunc(list, note, argObj_, initVal_); }; */ // This is the refactored codes $._chainedResult = function(list, note, argObj_, initVal_) { var chainedResultFunc = this._chainedResultFunc(note); return this._runChainedResult( list, note, argObj_, initVal_, chainedResultFunc); }; $._chainedResultFunc = function(note) { return this._rules.chainResultFunc(note); }; $._runChainedResult = function(list, note, argObj_, initVal_, resultFunc) { return resultFunc(list, note, argObj_, initVal_); }; // Think of a codebase with less than 100k lines but with already way more than 1k classes/interfaces and 10k functions/methods. It's almost a given that the deepest call stack in the codebase will be so deep that it can even approach the 100 call mark. It's because the only way for very small functions to be very verbose with tons of boilerplate is that most of those small functions aren't actually doing anything meaningful. We're talking about deeply nested delegates/forwarding functions which are all indeed doing very easy, simple and small jobs, and tons of interfaces or explicit dependencies having only 1 implementation or concrete dependency(configurable options with only 1 option ever used also has this issue). Of course, LID + LIV does have its places, especially when the business requirements always change so abruptly, frequently and unpredicably that even the most reasonable assumptions can be suddenly violated without any reason at all(I've worked with 1 such project). As long as there can still be sane and sensible architectures that can last very long, if the codebase isn't flexible in almost every direction, the software teams won't be able to make it when they've to implement absurd changes with ridiculously tight budgets and schedules. And the only way for the codebase to be possible to be so flexible is to have as many well-defined interfaces and seams as possible, as long as everything else is still in moderation. For the newcomers, the codebase will seem to be overengineered over nothing already happened, but that's what you'd likely do when you can never know what's invariant. Nevertheless, LID + LIV should still be refactored once there are solid reasons and evidences to prove that the codebase can begin to stablize, or the hidden technical debt incurred from excessive overengineering can quickly accumulate to the point of no return. At that point, even understanding the most common call stack can be almost impossible. Of course, if the codebase can really never stablize, then one can only hope for the best and be prepared for the worst, as such projects are likely death marches, or slowly becoming one. Rare exceptions are that, some codebases have to be this way, like the default RPG Maker MV codebase, due to the business model that any RPG Maker MV user can have any feature request and any RPG Maker MV plugin developer can develop any plugin with any feature. 
Summary While information density and volume are closely related, there's no strict implications from one to the other, meaning that there are different combinations of these 2 factors and the resultant style can be very different from each other. For instance, HID doesn't imply LIV nor vice versa, as it's possible to write a very terse long function and a very verbose short function; LID doesn't imply HIV nor vice versa for the very same reasons. In general, the following largely applies to most codebases, even when there are exceptions: Very HID + HIV = Massive Ball Of Complicated And Convoluted Spaghetti Legacy Very HID + LIV = Otherwise High Quality Codes That Are Hard To Fathom At First Very LID + HIV = Excessively Verbose Codes With Tons Of Redundant Boilerplate Very LID + LIV = Too Many Small Functions With The Call Stacks Being Too Deep Teams With Programmers Having Different Styles Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes While both can work with very HID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The latter values codebase quality over software engineer mental capacity due to their limits on taking information volume, while the former values the opposite due to their exceptionally strong mental power. Thus the former will likely think of the latter as being too weak to fathom the codes and they're thus the ones to blame, while the latter will probably think of the former as having too little architecture in mind and they're thus the ones to blame, as architectures that are beneficial or even necessary for the latter will probably be severe obstacles for the former. Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things While both can work with very HIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values function simplicity over function capabilities due to their limits on taking information density, while the former values the opposite due to their extremely strong information density decoding. Thus the former will likely think of the latter as doing too little things that actually matter in terms of important business logic as simplicity for the latter means time wasted for the former, while the latter will probably think of the former as being too needlessly complex when it comes to implementing important business logic, as development speed for the former means complexity that are just too high for the latter(no matter how hard they try). Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak It's clear that these 2 groups are at the complete opposites - The former preferring massive balls of complicated and convoluted spaghetti legacy over too many small functions with the call stacks being too deep due to the heavy need of optimizing the codebase to death, while the latter preferring the opposite due to the heavy need of making the codebase very flexible. Thus the former will likely think of the latter as over-engineering freaks while the latter will probably think of the former as over-optimization freaks, as codebase optimization and flexibility are often somehow at odds with each other, especially when one is heavily done. 
Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose It's clear that these 2 groups are at the complete opposites - The former preferring otherwise high quality codes that are hard to fathom at first over excessively verbose codes with tons of redundant boilerplate due to the heavy emphasis on the large long term pleasures, while the latter preferring the opposite due to the heavy emphasis on the small short term pains. Thus the former will likely think of the latter as being too messy and verbose while the latter will probably think of the former as being too concise and organized, as long term pleasures from the high codebase qualities are often at odds with short term pains of the newcomers fathoming the codebase at first, especially when one is heavily emphasized over the other. Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient While both can only work with very LIV well, their different capacities and takes on information density can still cause them to have ongoing significant conflicts. The latter values the learning cost over maintenance cost(the cost of reading already fathomed codes during maintenance) due to their limits on taking information density, while the former values the opposite due to their good information density skill and reading speed demands. Thus the former will likely think of the latter as being too ineffective and inefficient when writing codes that are easy to fathom in the short term but time-consuming to read in the long term, while the latter will likely think of the former as being too harsh to newcomers when writing codes that are fast to read in the long term but hard to fathom in the short term. Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles While both can only work with very LID well, their different capacities and takes on information volume can still cause them to have ongoing significant conflicts. The former values codebase beginner friendliness over software flexibility due to their generally lower tolerance on very small functions, while the latter values the opposite due to their limited information volume capacity and high familiarity with very small and flexible functions. Thus the former will likely think of the latter as being too flexible towards cases that are almost impossible to happen under the current business requirements due to such codebases being generally harder for newcomers to fathom, while the latter will likely think of the former as being too friendly towards beginners at the expense of writing too rigid codes due to codebases being beginner friendly are usually those just thinking about the present needs. Summary It seems to me that many coding standard/style conflicts can be somehow explained by the conflicts between HID and LID, and those between HIV and LIV, especially when both sides are being more and more extreme. The combinations of these conflicts may be: Very HID/HIV + HID/LIV = Too Little Architecture vs Too Weak To Fathom Codes Very HID/HIV + LID/HIV = Being Way Too Complex vs Doing Too Little Things Very HID/HIV + LID/LIV = Over-Optimization Freak vs Over-Engineering Freak Very HID/LIV + LID/HIV = Too Concise/Organized vs Too Messy/Verbose Very HID/LIV + LID/LIV = Too Hard To Read At First vs Too Ineffective/Inefficient Very LID/HIV + LID/LIV = Too Beginner Friendly vs Too Flexible For Impossibles Conclusions Of course, one doesn't have to go for the HID, LID, HIV or LIV extremes, as there's quite some middle grounds to play with. 
In fact, I think the best of the best software engineers should deal with all these extremes well while still being able to play with the middle grounds well, provided that such an exceptional software engineer can even exist at all. Nevertheless, it's rather common to work with at least some of the software engineers falling into at least 1 extremes, so we should still know how to work well with them. After all, nowadays most of the real life business codebase are about teamwork but not lone wolves. By exploring the importance of information density, information volume and their relationships, I hope that this article can help us think of some aspects behind codebase readability and the nature of conflicts about it, and that we can be more able to deal with more different kinds of codebase and software engineers better. I think that it's more feasible for us to be able to read codebase with different information density and volume than asking others and the codebase to accommodate with our information density/volume limitations. Also, this article actually implies that readability's probably a complicated and convoluted concept, as it's partially objective at large(e.g.: the existence of consistent formatting and meaningful naming) and partially subjective at large(e.g.: the ability to handle different kinds of information density and volume for different software engineers). Maybe many avoidable conflicts involving readability stems from the tendency that many software engineers treat readability as easy, simple and small concept that are entirely objective.
  6. After waiting for several months to observe the results of vaccines, I finally decided to go for Comirnaty, because now my job needs me to either be vaccinated or take a regular testing every 2 weeks(240 HKD per test), and it seems to me that Comirnaty is safe enough in my case :)

    1. PhoenixSoul

      PhoenixSoul

      If you start feeling symptoms as seen previously, do not try to let them pass by.
      Intake vitamin C infused food and drink, and flush out the poison that remains. Enough have died after vaccinations, not you too, dammit.

    2. DoubleX

      DoubleX

      Maybe I'll die, maybe I won't, just let's see what will happen to me after several weeks :)

  7. Most seasoned professional software engineering teams probably understand the immense value of DVCS in their jobs, but it seems to me that the concepts of DVCS isn't used much outside of software engineering, even when DVCS has existed for way more than a decade already, which is quite a pity for me. So how DVCS can be used outside of software engineering? Let's show it using the following example: You've a front-line customer service job(sitting on a booth with the customer on the other side while you're using a computer to do the work) which demands you to strictly follow a SOP covering hundreds of cases(each of your cases will be checked by a different supervisor but no one knows who that supervisor will be beforehand), and the most severe SOP breach can cause you to end up going to jail(because of unintentionally violating serious legal regulations) You've to know what cases should be handled by yourselves and what have to be escalated to your supervisors(but no one knows which supervisor will handle your escalation beforehand), because escalating too many cases that could've been handled by yourselves will be treated as incompetent and get yourselves fired, while handling cases yourselves that should've been escalated is like asking to be fired immediately As the SOP is constantly revised by the upper management, it'll change quite a bit every several weeks on average, so the daily verbal briefing at the start of the working day is always exercised, to ensure all of you will have the updated SOP, as well as reminding what mistakes are made recently(but not mentioning who of course) Clearly, a SOP of this scale with this frequency and amount of changes won't be fully written in a black and white manner(it'd cost hundreds of A4 papers per copy), otherwise the company would've to hire staffs that are dedicated to keep the SOP up to date, in which the company will of course treat this as ineffective and inefficient(and wasting tons of papers), so the company expects EVERYONE(including the supervisors themselves) to ALWAYS have ABSOLUTELY accurate memory when working according to the SOP As newcomer joins, they've about 2 months to master the SOP, and senior staff of the same ranks will accompany these newcomers during this period, meaning that the seniors will verbally teach the newcomers the SOP, using the memory of the former and assuming that the latter will remember correctly Needless to say, the whole workflow is just asking for trouble, because: Obviously, no one can have absolutely accurate memory, especially when it's a SOP covering hundreds of cases, so it's just incredibly insane to assume that EVERYONE ALWAYS have ABSOLUTELY accurate memory on that, but that's what the whole workflow's based on As time passes, one's memory will start to become more and more inaccurate gradually(since human's memory isn't lossless), so eventually someone will make a mistake, and the briefing on the upcoming several days will try to correct that, meaning that the whole briefing thing is just an ad-hoc, rather than systematic, way to correct the staff's memories Similarly, as newcomers are taught by the seniors using the latter's memory, and human communications aren't lossless either, it's actually unreasonable to expect the newcomers to completely capture the SOP this way(because of the memory loss of the seniors, the information loss in the communication, and the memory loss of the newcomers, which is essentially the phenomenon revealed by Chinese whipsers), even when they've about 2 
months to do so As each of your cases will be checked by a different supervisor and no one knows who that supervisor will be beforehand, and supervisors will also have memory losses(even though they'll usually deny that), eventually you'll have to face memory conflicts among supervisors, without those supervisors themselves even realizing that such conflicts among them do exist(the same problem will eventually manifest when you escalate cases to them, and this includes whether the cases should actually be escalated) Therefore, overtime, the memories on the SOP among the staff will become more and more different from each other gradually, eventually to the point that you won't know what to do as the memory conflicts among the supervisors become mutually exclusive at some parts of the SOP, meaning that you'll effectively have to gamble on which supervisor will handle your escalation and/or check your case, because there's no way you can know which supervisor will be beforehand Traditionally, the solution would be either enforcing the ridiculously wrong assumption that EVERYONE must ALWAYS have ABSOLUTELY accurate memory on a SOP worth hundreds of A4 papers even harder and more ruthlessly, or hiring staff dedicated to keep the written version of the SOP up to date, but even the written version will still have problems(albeit much smaller ones), because: As mentioned, while it does eliminate the issue of gradually increasing memory conflicts among staff overtime, having a written version per staff member would be far too ineffective and inefficient(not to mention that it's a serious waste of resources) When a written version of the SOP has hundreds of A4 papers and just a small parts of the SOP change, those staff dedicated to keep the SOP up to date will have to reprint the involved pages per copy and rearrange those copies before giving them back to the other staff, and possibly highlight the changed parts(and when they're changed) so the others won't have to reread the whole abomination again, and this will constantly put a very heavy burden on the former Because now the staff will rely on their own copies of the written version of the SOP, if there are difference among those written versions, the conflicts among the SOP implementations will still occur, even though now it'd be obvious that those staff dedicated to keep the SOP up to date will take the blame instead(but that'd mean they'll ALWAYS have to keep every copy up to date IMMEDIATELY, which is indeed an extremely harsh requirement for them) As it'd only be natural and beneficial for the staff to add their own notes onto their own copies of the written version of the SOP, when those written versions get updated, some of their notes there can be gone because those involved pages will be replaced, so now those staff might have to rewrite those notes, regardless of whether they've taken photos on those pages with their notes beforehand(but taking such photos would risk leaking the SOP), which still adds excessive burden on those staff As you're supposed to face customers at the other side of the booth while you're using a computer to do the work, it'd be detrimental on the customer service quality(and sometimes this can lead to the customer filing formal complaints, which are very major troubles) if you've to take out the written version of the SOP in front of the customer when you're not sure what to do in this case, even though it's still way, way better than screwing up the cases Combining all the above, that's where DVCS for the 
SOP can come into play:

- Because the written version of the SOP is now a soft copy (although this also works for soft copies without DVCS), it can be placed inside the system, and the staff can view it on their computers without much trouble, since the computer screens aren't facing the customers (which also largely mitigates the risk of staff leaking the written version of the SOP)
- Because the written version of the SOP is now in a DVCS, each staff member can have their own branch or fork of the SOP, which can be used to keep their own private notes as file changes (this assumes the SOP is broken down into several or even dozens of files, but that should be a given), and those notes can easily be carried over to updated versions of the annotated files by simply viewing the diffs of those files (or better yet, the notes can be completely separate files, although then the staff would have to know which note files correspond to which SOP files, which can be solved by carefully naming all those files and/or using well-named folders; a tiny sketch of this naming idea follows at the end of this post)
- Because the written version of the SOP is now centralized in the system (the master branch), every staff member should have the same latest version, thus virtually eliminating the problems caused by conflicts among different written versions held by different staff members, as well as the dedicated manual work needed to keep them consistent
- Clearly, the extra costs induced by this DVCS application are the initial system setup and introducing newcomers to using DVCS at work, which are all one-time costs instead of long-term ones; compared to the troubles caused by the other workflows, these one-time costs are really trivial
- By leveraging the issues and pull requests features (using blames as well might be just too much) of any decent DVCS, any staff member can raise concerns about the SOP, and they'll either be solved, or at least the problems will become clear to everyone involved, so this should be more effective and efficient than just verbal reflections to particular colleagues and/or supervisors on difficulties faced (if called for, anonymous issues and pull requests can even be used, although that might be going overboard)

So the detailed implementation of the new workflow can be something like this:

- The briefing before starting the work of the day should still take place, as it can be used to emphasize the most important SOP changes and/or recent mistakes made by colleagues (as blames not pointing to anyone specific) in the DVCS, so the staff don't have to check all the recent diffs themselves
- Whenever you're free, you can use the time to check the parts of the SOP that concern you from the computer in your booth, including parts that are unclear to you and recent changes, and even submit an anonymous issue about difficulties you faced when trying to follow those parts of the SOP (or you can try to answer some issues raised by the others as a means of helping them, without having to leave your booth or speak up and disturb the others)
- When you're facing a customer right in front of you and you're unsure what to do next, you can simply ask the customer to wait for a while and check the involved parts of the SOP without the customer even noticing (you can even use issues to ask for help and hope there are colleagues who are free and will help you quickly), thus minimizing the damage to customer service quality
- To prevent the DVCS from being abused by some staff members as a poor man's chat room at work, the supervisors can periodically check a small portion of the issues, blames and pull requests as samples to see whether they're essentially just conversations unrelated to work, and the anonymity feature can be suspended for a while if the abusers abuse that as well (if they don't use anonymity when making those conversations, the supervisors can apply disciplinary actions towards them directly); but the supervisors shouldn't always check all of them, or they'd be exhausted to death by the potentially sheer number of such things
- Of course, you still have to try to master the SOP yourself, as this DVCS is just meant to be an AUXILIARY to your memory, not an excuse to remember nothing; otherwise you'd end up constantly asking customers for unnecessary waits (to check the SOP) and asking colleagues redundant questions (even with minimal disruptions), making you so ineffective and inefficient all the time that you'd still end up being fired in no time

Of course, this is easier said than done in the real world, because while setting up a DVCS and training newcomers to use it are easy, simple and small tasks, the real key that makes things complicated and convoluted is the willingness of the majority to adopt a totally new way of doing things: it's such a grand paradigm shift that it's utterly alien to most people who aren't software engineers (when even quite some software engineers still reject DVCS in situations clearly needing it, just think about the resistance from the outsiders). Also, there are places where DVCS just isn't suitable at all, like emergency units having to strictly follow SOPs, because the situations would be too urgent for them to check the SOP in a DVCS even if they could use their mobile phones under such circumstances; these are cases where they do have to ALWAYS have ABSOLUTELY ACCURATE memories, as that's already the least evil we've known so far (bear in mind that they'd already have received extensive, rigorous training for months or even years before being put into action).

Nevertheless, I still believe that, if some big companies having nothing to do with software engineering are brave enough to use some short-term projects as pilot schemes for managing their staff SOPs with DVCS, more and more companies will realize the true value of this new way of doing things, causing more and more companies to follow, eventually to the point where it becomes the norm across multiple industries, just like a clerk using MS Office in their daily work.

To conclude, I think DVCS can at least be applied to manage some SOPs of some businesses outside of software engineering, and maybe it can be used for many other aspects of those industries as well; it's just that SOP management is the one where I've personally felt the enormous pain of lacking DVCS when it's obviously needed the most.
  8. Just read some of my game code that I wrote myself 4 years ago, and now I don't understand it at all lol

    1. PhoenixSoul

      I can definitely relate to forgetting things like that.

    2. Kayzee

      I don't find that to be much of a problem myself. :3

      Edit: Which is good given how much I slack on working on my game. Some parts of the code probably are 4+ years old!

  9. Note This plugin's available for commercial use Purpose Lets users set when/how ATB bars are shown on battler sprites Games using this plugin None so far Configurations Notetags Plugin Calls Video https://www.youtube.com/watch?v=xY_HrHi0e5M Prerequisites Plugins: 1. DoubleX RMMV Popularized ATB Core Abilities: 1. Little Javascript coding proficiency to fully utilize this plugin Terms Of Use You shall keep this plugin's Plugin Info part's contents intact You shalln't claim that this plugin's written by anyone other than DoubleX or his aliases None of the above applies to DoubleX or his/her aliases Changelog Download Link DoubleX RMMV Popularized ATB Bar
  10. Updates * v1.00d(GMT 0600 19-5-2021): * 1. Fixed all notetags not working bug
  13. Purpose Lets you set some audio/image files to be loaded upon game start This should boost the FPS on phones noticeably if there's enough memory Parameters Help Script Call Info Prerequisites Terms Of Use Contributors Changelog Download Link Pastebin doublex rmmv preloaded resources v100b.js
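  The idea behind such preloading can be sketched like this (a minimal illustration only, not this plugin's actual code; it assumes the vanilla RMMV ImageManager.loadBitmap(folder, name, hue, smooth) and AudioManager.createBuffer(folder, name) calls, and the resource names below are made up):

    // Made-up lists of resources to warm up at boot
    const PRELOADED_CHARACTERS = ["Actor1", "Actor2"];
    const PRELOADED_BGMS = ["Battle1"];
    const preloadedBuffers = []; // keeps references so the buffers aren't dropped

    const _Scene_Boot_create = Scene_Boot.prototype.create;
    Scene_Boot.prototype.create = function() {
        _Scene_Boot_create.call(this);
        // Loading a bitmap once puts it into RMMV's image cache (default hue 0)
        PRELOADED_CHARACTERS.forEach(name => {
            ImageManager.loadBitmap("img/characters/", name, 0, true);
        });
        // Creating an audio buffer starts loading the file before first playback
        PRELOADED_BGMS.forEach(name => {
            preloadedBuffers.push(AudioManager.createBuffer("bgm", name));
        });
    };

  Trading memory for fewer mid-game loads is exactly why the description warns that there has to be enough memory for this to pay off.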
  14. Updates * v1.00b(GMT 0300 27-Mar-2020): * 1. You no longer have to edit the value of * DoubleX_RMMZ.Preloaded_Resources_File when changing the plugin file * name * 2. Fixed the crashes when preloading animations, images, etc., without * hues(such cases will be understood as having the default hue 0 only)
  15. Let's say that there's a reproducible fair test with the following specifications:

- The variable to be tested is A
- All the other variables, as a set B, are controlled to be K
- When A is set to X, the test result is P
- When A is set to Y, the test result is Q

Then can you always safely claim that X and Y must universally lead to P and Q respectively, and that A is solely responsible for the difference between P and Q universally? If you think it's a definite yes, then you're probably oversimplifying control variables, because the real answer is this: when the control variables are set to K, then X and Y must lead to P and Q respectively.

Let's show you an example using software engineering (Test 1). Say there's a reproducible fair test about the difference in impact between the procedural, object oriented and functional programming paradigms on the performance of software engineering teams, with the other variables, like project requirements, available budgets, software engineer competence and experience, software engineering team synergy, etc., controlled to be the same specified constants, and the performance measured as the amount and importance of the conditions and constraints fulfilled in the project requirements, the budget spent (mainly time), the amount and severity of unfixed bugs, etc. The result is that procedural programming always performs the best in project requirement fulfillment and budget consumption, with the fewest bugs, all being the least severe, and the result is reproducible, so it seems scientific, right? So can we safely claim that procedural programming always universally performs the best in all those regards? Of course that's absurd to the extreme, but those experiments are indeed reproducible fair tests, so what's really going on?

The answer is simple: the project requirements are always (knowingly or unknowingly) controlled to be those inherently suited for procedural programming, like writing the front end of an easy, simple and small website just for the clients to conveniently fill in some basic forms online (back from way before things like Google Forms became a real thing), with the project having to be finished within a very tight time scope. In this case, it's obvious that both object oriented and functional programming would be overkill, because the complexity is tiny enough to be handled by procedural programming directly, and the benefits of the former two need time to materialize, whereas the tight time scope of the project means that such up front investments are probably not worth it. If the project were changed to writing a AAA game, or a complicated and convoluted full stack cashier and inventory management system for supermarkets, then I'm quite sure that procedural programming wouldn't perform the best, because procedural programming just isn't suitable for writing such software (actually, in reality, the vast majority of practical projects should be solved using an optimal mix of different paradigms, but that's beyond the scope of this example).

This example aims to show that even a reproducible fair test isn't always accurate when it comes to drawing universal conclusions, because the contexts of that test, which are the control variables, also influence the end results, so the contexts should always be clearly stated when drawing the conclusions, to ensure that those conclusions won't be applied to situations where they no longer hold.
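To restate the opening setup in symbols (just a formalization of the paragraphs above, in LaTeX notation, adding nothing beyond them): what such a reproducible fair test actually licenses is the conditional claim

    (B = K) \implies \big[(A = X \Rightarrow P) \wedge (A = Y \Rightarrow Q)\big]

whereas the oversimplified reading silently upgrades it to the universal claim

    \forall B.\ \big[(A = X \Rightarrow P) \wedge (A = Y \Rightarrow Q)\big]

and the rest of this post is about the gap between these two claims.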
Another example can be a reproducible fair test examining whether proper up front architectural designs (which doesn't mean it must be waterfall) are more productive than counterproductive, or vice versa (Test 2). If the test results say it's more productive than counterproductive, that still doesn't mean the conclusion is universally applicable, because the project requirements, as part of the control variables, can be well-established, well-known problems with well-known solutions, with no abrupt or absurd changes to the specifications ever. Similarly, if the test results say it's more counterproductive than productive, that still doesn't mean the conclusion is universally applicable, because the project requirements, as part of the control variables, can be highly experimental, incomplete and unclear in nature, meaning that the software engineering team must first quickly explore some possible directions towards the final solution, and perhaps each direction demands a PoC or even an MVP to be properly evaluated, so proper architectural designs can only gradually emerge and be refined in such cases, especially when the project requirements are constantly and drastically adjusted.

If a universally applicable conclusion has to be reached, then one way to get there is to run even more fair tests, with the control variables set to different constants, and/or with different variables to be tested, to avoid conclusions that actually just apply to some unstated contexts. For instance, in Test 2, the project nature, as the major part of the control variables, can be changed, and one can then check whether the new reproducible fair tests on the productivity of proper up front architectural designs produce changed results; or in Test 1, the programming paradigm to be used can become part of the control variables, whereas the project nature can become the variable to be tested in the new reproducible fair tests. Of course, that'd mean a hell of a lot of reproducible fair tests (and all those results must be properly integrated, which is itself a very complicated and convoluted matter), and the difficulties and costs involved likely make the whole thing too infeasible to be done within a realistic budget in the foreseeable future, but it's still better than making some incomplete tests and falsely drawing universal conclusions from them, when those conclusions can only be applied to some contexts (and those contexts should be clearly stated).

Therefore, to be practical while still respectful to the truth, until the software engineering industry can finally perform complete tests that reliably draw actually universal conclusions, it's better for practitioners to accept that many of the conclusions there are still just contextual, and it's vital for us to carefully and thoroughly examine our circumstances before applying those situational test results. For example, JavaScript (and sometimes even TypeScript) is said to suck very hard, partly because there are simply too many insane quirks, and writing JavaScript is like driving without any traffic rules at all, so it's only natural that we should avoid JavaScript as much as we can, right?
However, to a highly devoted, diligent and disciplined JavaScript programmer, JavaScript is one of the few languages that provide an amount of control and freedom that is simply unthinkable in many other programming languages, and such programmers can use it extremely effectively and efficiently, all without incurring too much technical debt that can't be repaid on time (of course, this is only possible when such programmers are very experienced in JavaScript and care a great deal about code quality and architectural design).

The difference here is again the underlying context: those blaming JavaScript might usually be working on large projects (like those way beyond the 10M LoC scale) with large teams (like way beyond 50 members), and it'd be rather hard to have a team with all members being highly devoted, diligent and disciplined, so the amount of control and freedom offered by JavaScript will most likely lead to chaos; whereas those praising JavaScript might usually be working alone or with a small team (like way fewer than 10 members) on small projects (like those way below the 100k LoC scale), where the strict rules imposed by many statically and strongly typed languages (especially Java with checked exceptions) may just get in their way, because those restrictions are up front investments that need time and project scale to manifest their returns, and such time and project scale are usually lacking in small projects worked on by small teams, where short-term effectiveness and efficiency are generally more important. Do note that these opinions, when combined, can also be regarded as reproducible fair tests, because the amount of coherent and consistent opinions on each side is huge, and many of them won't make the same complaint or compliment when only the language is changed. Therefore, it's normally pointless to totally agree or disagree with a so-called universal conclusion about some aspect of software engineering; what's truly meaningful instead is to try to figure out the contexts behind those conclusions, assuming they're not already stated clearly, so we can better know when to apply those conclusions and when to apply some others.

Actually, similar phenomena exist outside of software engineering. For instance, let's say there's a test on the relation between the number of observers of a knowingly immoral wrongdoing and the percentage of them going to help the victims and stop the culprits, with the entire scenes under the watch of surveillance cameras, whose recordings are sampled in large amounts to form reproducible fair tests. Now, some researchers claim that the results from those samplings are that the more observers there are, the higher the percentage of them going to help the victims and stop the culprits, so can we safely conclude that the bystander effect is actually wrong? That at least depends on whether those bystanders knew the surveillance cameras existed, because if they did know, then it's possible that those results are affected by the Hawthorne effect, meaning that the percentage of them going to help the victims and stop the culprits could be much, much lower if there were no surveillance cameras, or if they didn't know the cameras existed (but that still doesn't mean the bystander effect is right, because the truth could be that the percentage of bystanders going to help the victims has little to do with the number of bystanders).
In this case, the existence of those surveillance cameras is actually a major part of the control variables in those reproducible fair tests, and this can be regarded as an example of the observer's paradox (whether this can justify the growing number of surveillance cameras everywhere is beyond the scope of this article). Of course, this can be rectified, like by trying to conceal those surveillance cameras, or by finding some highly trained researchers to regularly record places that are likely to have culprits openly hurting victims in front of varying numbers of observers, without those observers knowing of the researchers' existence, but needless to say, these alternatives are just so unpragmatic that no one will really do them, and they'd also pose even greater problems, like serious privacy issues, even if they could actually be implemented.

Another example: when I was still a child, I volunteered for a study of the sleep quality of children in my city, and I was asked to sleep in a research center, meaning that my sleeping behaviors would be monitored. I can still vaguely recall that I ended up sleeping quite poorly that night, despite the fact that both the facilities (especially the bed and the room) and the personnel there were really nice, while I slept well most of the time back when I was a child, so such a seemingly strange result was probably because I failed to quickly adapt to a vastly different sleeping environment, regardless of how good the bed in that research center was. While I can vaguely recall that the full results of the entire study of all the volunteered children were far from ideal, the change of sleeping environment still played a main part in the control variables of those reproducible fair tests, so I still wonder whether the sleep quality of the children in my city back then was really that subpar. To mitigate this, those children could have slept in the research center for many nights instead of just one, in order to eliminate the factor of having to adapt to a new sleeping environment, but of course the cost of such research to both the researchers and the volunteers (as well as their families) would be prohibitive, and the sleep quality results still might not hold when those children go back to their original sleeping environments. Another way might be to let parents buy some instruments, with some training, to monitor the sleep quality of their children in their original sleeping environments, but again, the feasibility of such research and the willingness of the parents to carry it out would be really great issues.

The last example is the famous Milgram experiment: does it really mean most people are so submissive to their perceived authorities when it comes to immoral wrongdoings? There are some problems to be asked, at least including the following. Did the participants really think the researchers would just let the victims die, or suffer irreversible injuries, from electric shocks? After all, such experiments would likely be highly illegal, or at least highly prone to severe civil claims, meaning that it's only natural for those being researched to doubt the true nature of the experiment. Did those fake electric shocks and fake victims act convincingly enough to make the experiment look real? If those being researched figured out that they were fakes, the meaning of the whole experiment would be completely changed. Did those being researched (the "teachers") really not know that they were actually the ones being researched?
Because if those "students" were really the ones being researched, why would the researchers need extra participants to carry out the experiments (meaning that the participants would wonder about the necessity of some of them being "teachers", and why not just make them all "students" instead)? And assuming that the whole "teachers" and "students" setup, as well as the electric shocks, were real, did those "students" sign some kind of private but legally valid consent proving that they knew they were going to receive real electric shocks when giving wrong answers, and that they were willing to face them for the research? If those "teachers" had reasons to believe that this was the case, their behaviors would be really different from those in their real lives.

In this case, the majority of the control variables in those reproducible fair tests are the test setups themselves, because such experiments would be immoral to the extreme if those being researched truly did immoral wrongdoings, meaning that it'd be inherently hard to properly establish a concrete and strong causation between immoral wrongdoings and some other fixed factors, like submission to authorities. Some may say that those being researched did believe that they were performing immoral wrongdoings, because of their reactions during the test and the interview afterwards, and those reactions would also manifest when someone really does some knowingly immoral wrongdoing, so the Milgram experiment, which has already been reproduced, still largely holds. But let's consider this thought experiment: you're asked to play an extremely gory, sadistic and violent VR game with state of the art audio, immersion and visuals, with some authorities ordering you to kill the most innocent characters with the most brutal means possible in that game, and I'm quite certain that many of you would show many of the reactions manifested by those being researched in the Milgram experiment, but that doesn't mean many of you would knowingly perform immoral wrongdoings when being submissive to the authority, because no matter how realistic those actions seem to be, it's still just a game after all. The same might hold for the Milgram experiment as well, where those being researched did know that the whole thing was just a great fake on one hand, but still manifested reactions the same as someone knowingly doing immoral wrongdoings on the other, because the fake felt so real that their brains got cheated and showed some real emotions to some extent, despite them knowing that it was still just a fake after all, just like real, immense emotions being evoked when watching some immensely emotional movies.

It doesn't mean the Milgram experiment is pointless though, because it at least proves that being submissive to perceived or real authorities will make many people do many actions that they wouldn't normally do otherwise, but whether such actions include knowingly immoral wrongdoings might remain inconclusive from the results of that experiment (even if authorities do cause someone to do immoral wrongdoings that wouldn't be done otherwise, it could still be because that someone really doesn't know that they're immoral wrongdoings, due to the key information being obscured by the authorities, rather than that someone being submissive to those authorities even while knowing they're immoral wrongdoings).
Therefore, to properly establish a concrete and strong causation between knowingly immoral wrongdoings and submission to perceived or real authorities, we might have to investigate actual immoral wrongdoings in real life, and what parts the perceived or real authorities played in those incidents.

To conclude, those making reproducible fair tests should clearly state their underlying control variables when drawing conclusions, whenever feasible, and those trying to apply those conclusions should be clear about their circumstances, to determine whether those conclusions do apply to the situations they're facing, as long as the time needed for such assessments remains practical in those cases.
  14. Note This plugin's available for commercial use Purpose Lets you bind hotkeys to skills for actors outside battles, and use them to select usable skills for actors inside battles Compatibility Fix DoubleX RMMV Skill Hotkeys Compatibility Introduction Videos DoubleX RMMV Skill Hotkeys Games using this plugin None so far Parameters Notetags Plugin Calls Plugin Commands Configurations Author Notes Instructions Prerequisites Terms Of Use Authors DoubleX Changelog Download Links DoubleX RMMV Skill Hotkeys DoubleX RMMV Skill Hotkeys Unit Test DoubleX RMMV Skill Hotkeys v101b.js DoubleX RMMV Skill Hotkeys Unit Test v100a.js
  17. DoubleX RMMV Skill Hotkeys
    Updates * v1.01b(GMT 0400 13-2-2021): * 1. Fixed the bug of being able to select unusable hotkey skills in the * skill window in battles
  18. Purpose Lets you set some skills/items to have battler and skill/item cooldowns Introduction * 1. This plugin lets you set 2 kinds of skill/item cooldowns: * - Skill/Item cooldown - The number of turns(battle turn in turn based, * individual turn in TPBS) needed for the * skill/item to cool down before it becomes * usable again * - Battler cooldown - The number of turns(battle turn in turn based, * individual turn in TPBS) needed for the battler * that just executed the skill/item to cool down * before that battler can input actions again * 2. If the skill/item cooldown is 1 turn, it means battlers with multiple * action slots can only input that skill/item once instead of as many * times as the action slots allow * If the battler cooldown is negative, it means the TPB bar charging * value will be positive instead of 0 right after executing the * skill/item(so a -1 battler cooldown means the battler will become * able to input actions again right after executing such skills/items) * 3. When updating the battler individual turn count in TPBS, the decimal * parts of the battler cooldown will be discarded, but those parts will * still be used when actually increasing the time needed for that * battler to become able to input actions again * In the turn based battle system, the decimal parts of the battler * cooldown count as 1 turn * The decimal parts of the final skill/item cooldown value will be * discarded * 4. Skill/item cooldown can be set to apply outside battles as well * Skill/item cooldown won't be updated when the battler has fully * charged the TPB bar Video Video(v1.02a+) Games using this plugin None so far Parameters Notetags Script Calls Plugin Commands Plugin Query Info Prerequisites Terms Of Use Contributors Changelog Download Link Demo Link
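  The turn arithmetic described in points 2 and 3 can be illustrated like this (a minimal sketch with made-up function names, not the plugin's actual code; the negative-cooldown mapping is one plausible reading of the description):

    // Turns a raw battler cooldown value into whole turns, per point 3 above
    function battlerCooldownTurns(cooldown, isTpbs) {
        // TPBS: decimal parts are discarded from the turn count (but still
        // lengthen the time before the battler can input actions again)
        if (isTpbs) return Math.floor(cooldown);
        // Turn based: any decimal part counts as 1 extra full turn
        return Math.ceil(cooldown);
    }

    // Per point 2, a negative battler cooldown becomes a positive TPB charge
    // right after execution, e.g. -1 means "can input again immediately"
    function tpbChargeAfterExecution(cooldown) {
        return cooldown < 0 ? Math.min(-cooldown, 1) : 0;
    }

    // The final skill/item cooldown always has its decimal parts discarded
    const skillItemCooldownTurns = cooldown => Math.floor(cooldown);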
  17. Updates * { codebase: "1.1.1", plugin: "v1.02a" }(2021 Feb 7 GMT 1300): * 1. Added skillItemCooldownGaugeColor1 and skillItemCooldownGaugeColor2 * to let you show the TPB battler cooldown bar inside battles with * configurable colors * 2. Added cancelBattlerCooldownHotkeys and * cancelSkillItemCooldownHotkeys to let you set some hotkeys to * cancel the battler/skill item cooldown of the corresponding actors * respectively * 3. Added the following parameters: * - canCancelBattlerCooldown * - canCancelSkillItemCooldown * - cancelBattlerCooldownFail * - cancelSkillItemCooldownFail * - cancelBattlerCooldownSuc * - cancelSkillItemCooldownSuc * - canCancelBattlerCooldownNotetagDataTypePriorities * - canCancelSkillItemCooldownNotetagDataTypePriorities * - cancelBattlerCooldownFailNotetagDataTypePriorities * - cancelSkillItemCooldownFailNotetagDataTypePriorities * - cancelBattlerCooldownSucNotetagDataTypePriorities * - cancelSkillItemCooldownSucNotetagDataTypePriorities * 4. Added the following plugin commands: * - canCancelBattlerCooldown * - canCancelSkillItemCooldown * - cancelBattlerCooldown * - cancelSkillItemCooldown * 5. Added the following notetags: * - canCancelBattler * - canCancelSkillItem * - cancelBattlerFail * - cancelSkillItemFail * - cancelBattlerSuc * - cancelSkillItemSuc Video
  20. I just received an email like this:

Title: Notification Case #(Some random numbers)
Sender: (Non-PayPal logo)service@paypal.com.(My PayPal account location) <(Non-PayPal email used by the real scammers)>
Recipients: (My email), (The email of an innocent straw man used by the real scammers)
Contents(With UI styles copying those in real PayPal emails):

Someone has logged into your account
We noticed a new login with your PayPal account associated with (The email of an innocent straw man used by the real scammers) from a device we don't recognize. Because of that we've temporarily limited your account until you renew and verify your identity. Please click the button below to login into your account for verify your account.
(Login button copying that in real PayPal emails)
If this was you, please disregard this email.
(Footers copying those in real PayPal emails)

I admit that I'm incredibly stupid, because I almost believed that it's a real PayPal email, and I only realized that it's a scam right after I clicked the login button, because it links to a URL that's completely different from the login page of the real PayPal (so fortunately I didn't input anything there). While I've faced many old-school phishing emails and could figure them all out right from the start, I've never seen phishing emails like this one, and what makes me feel even more dumb is that I already had 2FA applied to my PayPal account before receiving this scam email, meaning that my phone would've received a PayPal verification SMS out of nowhere if there had really been an unauthorized login to my account.

Of course, that straw man email owner is completely innocent, and I believe that owner already received the same scam email with me being the straw man, so that owner might think that I really performed an unauthorized login into his/her PayPal account, if he/she didn't realize that the whole email's just a scam. Before I realized that it's just a scam, I thought he/she had really done what the email claims as well, so I just focused on logging into my PayPal account to assess the damage done and evaluate countermeasures to be taken, and if I hadn't realized that it's just a scam, I'd have given the password of my PayPal account to the scammers on their fake PayPal login page.

I suspect that many more PayPal users might have already received/are going to receive such scam emails, and I think this way of phishing can work for many other online payment gateways as well, so I think I can do some good by sharing my case, hoping that only I'll be this dumb (even though I didn't give the scammers my PayPal password in the end).
  21. The complete Microsoft Word file can be downloaded here(as a raw file)

Summary

The whole password setup/change process is as follows:
1. The client inputs the user ID and its password in plaintext
2. A salt for hashing the password in plaintext will be randomly generated
3. The password will be combined with a fixed pepper in the client software source code and the aforementioned salt, to be hashed in the client terminal by SHA3-512 afterwards
4. The hashed password, as a hexadecimal number with 128 digits, will be converted to a base 256 number with 64 digits, which will be repeated 8 times in a special manner, and then broken down into a list of 512 literals, each being either one of the numeric literals 1 to 100 or any of the 156 named constants
5. Each of those 512 numeric literals or named constants will be attached to existing numeric literals and named constants via different ways and combinations of additions, subtractions, multiplications and divisions, and the whole attachment process is determined by the fixed pepper in the client software source code
6. The same attachment process will be repeated, except that this time it's determined by a randomly generated salt in the client terminal
7. That list of 512 distinct roots, with the ordering among all roots and all their literal expressions preserved, will produce the resultant polynomial equation of degree 512
8. The resultant polynomial equation will be encoded into numbers and number separators in the client terminal
9. The encoded version will be encrypted by RSA-4096 on the client terminal with a public key there before being sent to the server, which has the private key
10. The server decrypts the encrypted polynomial equation from the client with its RSA-4096 private key, then decodes the decrypted version in the server to recover the original polynomial equation, which will finally be stored there
11. The 2 aforementioned different salts will be encrypted by 2 different AES-256 keys in the client software source code, and their encrypted versions will be sent to the server to be stored there
12. The time complexity of the whole process, except the SHA3-512, RSA-4096 and AES-256, should be controlled to quadratic time

The whole login process is as follows:
1. The client inputs the user ID and its password in plaintext
2. The client terminal will send the user ID to the server, which will send back its corresponding salts for hashing the password in plaintext and forming distinct roots respectively, already encrypted in AES-256, to the client terminal, assuming that the user ID from the client does exist in the server(otherwise the login fails and nothing will be sent back from the server)
3. The password will be combined with a fixed pepper in the client software source code, and the aforementioned salt that is decrypted in the client terminal using the AES-256 key in the client software source code, to be hashed in the client terminal by SHA3-512 afterwards
4. The hashed password, as a hexadecimal number with 128 digits, will be converted to a base 256 number with 64 digits, which will be repeated 8 times in a special manner, and then broken down into a list of 512 literals, each being either one of the numeric literals 1 to 100 or any of the 156 named constants
5. Each of those 512 numeric literals or named constants will be attached to existing numeric literals and named constants via different ways and combinations of additions, subtractions, multiplications and divisions, and the whole attachment process is determined by the fixed pepper in the client software source code
6. The same attachment process will be repeated, except that this time it's determined by the corresponding salt sent from the server that is decrypted in the client terminal using a different AES-256 key in the client software source code
7. That list of 512 distinct roots, with the ordering among all roots and all their literal expressions preserved, will produce the resultant polynomial equation of degree 512
8. The resultant polynomial equation will be encoded into numbers and number separators in the client terminal
9. The encoded version will be encrypted by RSA-4096 on the client terminal with a public key there before being sent to the server, which has the private key
10. The server decrypts the encrypted polynomial equation from the client with its RSA-4096 private key, then decodes the decrypted version in the server to recover the original polynomial equation
11. Whether the login will succeed depends on whether the literal expression of the polynomial equation from the client exactly matches the expected counterpart already stored in the server
12. The time complexity of the whole process, except the SHA3-512, RSA-4096 and AES-256, should be controlled to quadratic time

For an attacker trying to get the raw password in plaintext:
1. If the attacker can only sniff the transmission from the client to the server to get the encoded then encrypted version(which is encrypted by RSA-4096) of the polynomial equation, the salt for its roots, and the counterpart for the password in plaintext, the attacker first has to break RSA-4096, then figure out the highly secret and obfuscated algorithm to decode those numbers and number separators into the resultant polynomial equation and the way its roots are attached to existing numeric literals and named constants
2. If the attacker has the resultant polynomial equation of degree 512, its roots must be found, but there's no direct formula to do so analytically due to the Abel-Ruffini theorem, and factoring such a polynomial with 156 different named constants efficiently is very, very complicated and convoluted
3. If the attacker has direct access to the server, the expected polynomial equation can be retrieved, but the attacker still has to solve that polynomial equation of degree 512 to find all its roots with the right ordering among them and all their correct literal expressions
4. If the attacker has direct access to the client software source code, the pepper for hashing the password in plaintext, the pepper used on the polynomial equation roots, and the highly secret and obfuscated algorithm for using them with the salt counterparts can be retrieved, but that's still far from being able to find all the roots of the expected polynomial equation of degree 512
5. If the attacker has all those roots, the right ordering among them and all their correct literal expressions still have to be figured out, and the salts and peppers for those roots have to be properly removed as well
6. If the attacker has all those roots with the right ordering among them, all their correct literal expressions, and the salts and peppers on them removed, the attacker has effectively recovered the hashed password, which is mixed with salts and peppers in plaintext
7. The attacker then has to figure out the password in plaintext, even with the hashing function, salt, pepper, and the highly secret and obfuscated algorithm that combines them known
8. Unless there are really efficient algorithms for every step involved, the time complexity of the whole process can be as high as factorial time
9. As users are still inputting passwords in plaintext, dictionary attacks still work to some extent, but if the users are careless with their password strengths, then no amount of cryptography will be safe enough
10. Using numerical methods to find all the roots won't work in most cases, because such methods are unlikely to find those roots analytically, let alone with the right ordering among them and all their right literal expressions, which are needed to produce the resultant polynomial equation with literal expressions exactly matching the expected one
11. Using rainbow tables won't work well either, because such a table would be way too large to be used in practice, due to the number of polynomial equations with degree 512 being unlimited in theory
12. Strictly speaking, the whole password encryption scheme isn't a one-way function, but the time complexity needed for encryption compared to that for decryption is so trivial that this scheme can act like such a function

Areas demanding further research:
1. The time complexity of factoring a polynomial of degree n with named constants into n factors analytically
2. Possibilities of collisions arising from the ordering among all roots and all their different literal expressions
3. Existence of efficient algorithms for finding the right ordering among all roots and all their right literal expressions
4. Strategies for setting up the fixed peppers and generating random salts to form roots with maximum encryption strength

Essentially, the whole approach of using polynomial equations for encryption is to exploit equations that are easily formed from their analytical solution sets but very hard to solve analytically, especially when exact literal matches, rather than just mathematical identity, are needed to match the expected equations. So it's not strictly restricted to polynomial equations with a very high degree; maybe very high order partial differential equations with many variables, complex coefficients and functions accepting complex numbers can also work, because there are no known analytical algorithms for solving such equations yet, while analytical solutions are demanded to reproduce the same partial differential equations with exact literal matches, as long as performing the partial differentiations analytically can be efficient enough.
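  For a feel of steps 3-4 and step 7, here's a minimal sketch in Node.js (assuming a Node build whose OpenSSL supports sha3-512; all function names are made up, the "special manner" of repetition is replaced by plain repetition as a placeholder, and the numeric expansion below only demonstrates the cost asymmetry, since the real scheme keeps the roots as symbolic literal expressions):

    const crypto = require("crypto");

    // Steps 3-4: hash password + pepper + salt with SHA3-512, then read the
    // 64 byte digest as 64 base 256 digits and repeat them to get 512 seeds,
    // each of which would then map to a numeric literal 1-100 or one of the
    // 156 named constants (64 * 8 = 512, and 100 + 156 = 256 possible values)
    function rootSeeds(password, salt, pepper) {
        const digest = crypto
            .createHash("sha3-512")
            .update(`${pepper}${password}${salt}`)
            .digest(); // Buffer of 64 bytes
        const seeds = [];
        for (let i = 0; i < 8; i++) seeds.push(...digest);
        return seeds; // 512 values in 0-255
    }

    // Step 7: expanding (x - r1)(x - r2)...(x - rn) into coefficients takes
    // only O(n^2) time, while recovering the roots (with their ordering and
    // literal expressions) from the degree 512 result is the hard direction
    function polyFromRoots(roots) {
        let coeffs = [1]; // the constant polynomial 1
        for (const r of roots) {
            const next = new Array(coeffs.length + 1).fill(0);
            for (let i = 0; i < coeffs.length; i++) {
                next[i] -= coeffs[i] * r; // existing term times -r
                next[i + 1] += coeffs[i]; // existing term times x
            }
            coeffs = next;
        }
        return coeffs; // coefficients from the constant term up to x^n
    }

  Note that with plain numbers the coefficients overflow double precision long before degree 512, which is consistent with the summary's insistence on exact literal expressions with named constants rather than numeric values.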
  22. Purpose Lets you run some code set by your notetags on some important action execution timings Introduction * 1. This plugin lets you use notetags to set what happens when an * action's just executed, and different cases like miss, evade, counter * attack, magic reflection, critical hit, normal execution, substitute, * right before starting to execute actions, and right after finishing * executing the actions, can have different notetags * 2. You're expected to write JavaScript code directly, as there are so * many possibilities that most of them are just impossible to be * covered by this plugin itself, so this plugin just lets you write * JavaScript code that is executed on some important timings Video Video(v1.01a+) Games using this plugin None so far Parameters Notetags Script Calls Plugin Commands Prerequisites Plugins: 1. DoubleX RMMZ Enhanced Codebase Abilities: 1. Some RMMZ plugin development proficiency (Basic knowledge on what RMMZ plugin development does in general, with several easy, simple and small plugins written without nontrivial bugs up to the 1000 LoC scale, but still being inexperienced) Terms Of Use Contributors Changelog Download Link Demo Link
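  As the introduction says, the notetags hold raw JavaScript; a hypothetical example (the binding names subject and target are assumptions here, and the real notetag syntax and timing names are documented in the plugin file itself) of code attached to the critical hit timing might be:

    // Made-up snippet to be placed in such a notetag: whenever the executed
    // action crits, the attacker heals 5% of their max HP
    subject.gainHp(Math.round(subject.mhp * 0.05));
    subject.startDamagePopup(); // show the heal as a damage popup right away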
  21. Updates * { codebase: "1.1.1", plugin: "v1.01a" }(2020 Dec 26 GMT 1300): * 1. Added the following notetag types: * subjectMiss * subjectEva * subjectCnt * subjectMrf * subjectCri * subjectNorm * subjectSubstitute * 2. Added the following parameters: * subjectMissNotetagDataTypePriorities * subjectEvaNotetagDataTypePriorities * subjectCntNotetagDataTypePriorities * subjectMrfNotetagDataTypePriorities * subjectCriNotetagDataTypePriorities * subjectNormNotetagDataTypePriorities * subjectSubstituteNotetagDataTypePriorities * 3. Fixed the eventEntry of all notetags not correctly accepting all * intended suffixes and rejecting the unintended ones Video The latest version of DoubleX RMMZ Enhanced Codebase is needed as well :)
  24. Purpose Lets you change some effectively hardcoded TPBS configurations on the fly Introduction * 1. By default, many TPBS configurations are effectively hardcoded, but * many users will want to change many of them to suit their needs * 2. This plugin lets you do so, although you might want to write some * JavaScript code directly, as there are just too many possibilities * to be handled Video Video(v1.01a+) Games using this plugin None so far Parameters Notetags Script Calls Plugin Commands Todo Prerequisites Terms Of Use Contributors Changelog Download Link Demo Link
  25. Updates * { codebase: "1.1.1", plugin: "v1.01a" }(2020 Dec 25 GMT 1100): * 1. Added tpbChargeGaugeColor1 and tpbChargeGaugeColor2 to let you * configure the TPB charging bar colors inside battles * 2. Added tpbIdleGaugeColor1 and tpbIdleGaugeColor2 to let you show the * TPB idling bar inside battles with configurable colors * 3. Added tpbCastGaugeColor1 and tpbCastGaugeColor2 to let you show the * TPB casting bar inside battles with configurable colors * 4. Added tpbReadyGaugeColor1 and tpbReadyGaugeColor2 to let you show * the TPB cast ready bar inside battles with configurable colors * 5. Added isTpbTimeActive to let you set the TPBS wait conditions more * precisely Video
  26. Note While this plugin's already fully functional, there are still many more modules to be implemented, so the feature set isn't complete yet. Purpose To be the most flexible, performant and powerful ATB system framework with the greatest amount of freedom while being user-friendly Introduction * 1. This plugin aims to be the most flexible, performant and powerful * ATB system with the greatest amount of freedom for users to fulfill * as many functional needs as they want in as many ways as they want * 2. You may want to treat this as a nano ATB framework, as parts of the * system are written by you via parameters/configurations/notetags/calls * 3. Almost every parameter and notetag can be written as direct * JavaScript, thus giving you the maximum amount of control over them * 4. (VERY ADVANCED) You can even change most of the JavaScript code * written by you on the fly(and let your players do so with a system * settings plugin), but you should only do so if you really know what * you're truly doing Video Games using this plugin None so far Finished Modules Addressed Foreign Plugins Upcoming Modules Possibly Upcoming Modules Todo Inherited Behaviors From The Default RMMV Battle System Current Technical Limitations Author Notes FAQ Prerequisites Terms Of Use Instructions Contributors Changelog Demo
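  The "parameters as direct JavaScript, changeable on the fly" design can be pictured like this (a generic sketch of the pattern with a made-up parameter, not the framework's actual code):

    // A configuration stored as a function instead of a fixed number
    const atbParams = {
        // made-up parameter: ATB fill amount per frame for a given battler
        fillRatePerFrame: battler => (100 + battler.agi) / 60000
    };

    // Because the function is re-read on every call, any runtime
    // reassignment takes effect on the very next frame
    function updatedAtbValue(battler, oldValue) {
        return oldValue + atbParams.fillRatePerFrame(battler);
    }

    // Changing the code on the fly is then just swapping the function,
    // which a system settings plugin could expose to players
    atbParams.fillRatePerFrame = battler => battler.agi / 30000;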