
DoubleX

Member
  • Content Count

    1,002
  • Joined

  • Last visited

  • Days Won

    9

DoubleX last won the day on September 26 2020

DoubleX had the most liked content!

Community Reputation

207

2 Followers

About DoubleX

  • Rank
    Just a nameless weakling
  • Birthday 06/14/1991

Profile Information

  • Gender
    Male

Recent Profile Visitors

14,662 profile views
  1. Updates v0.05c (GMT 0600 28-11-2021): Fixed wrong eval of ecatb_battler_scale and uninitialized battler turn bugs
  2. It seems to me that that Yanfly plugin will just mirror everything attached to that sprite, as long as the sprite itself is to be mirrored, meaning that the same problem would occur if another plugin attached an HP/MP/TP bar to it. Regardless, I'll still look into this issue and find a way to fix it
  3. Updates v1.03f (GMT 0700 23-6-2021): Fixed the visuals of the action sequences of actor sprites being reset when other actors are inputable bug
  4. After waiting for several months to observe the results of the vaccines, I finally decided to go for Comirnaty, because now my job requires me to either be vaccinated or take a regular test every 2 weeks (240 HKD per test), and it seems to me that Comirnaty is safe enough in my case :)

    1. PhoenixSoul

      PhoenixSoul

      If you start feeling symptoms as seen previously, do not try to let them pass by.
      Intake vitamin C infused food and drink, and flush out the poison that remains. Enough have died after vaccinations, not you too, dammit.

    2. DoubleX

      DoubleX

      Maybe I'll die, maybe I won't, let's just see what happens to me after several weeks :)

  5. Most seasoned professional software engineering teams probably understand the immense value of DVCS in their jobs, but it seems to me that the concepts of DVCS aren't used much outside of software engineering, even though DVCS has existed for well over a decade already, which is quite a pity to me. So how can DVCS be used outside of software engineering? Let's show it using the following example:
     - You have a front-line customer service job (sitting in a booth with the customer on the other side while you're using a computer to do the work) which demands that you strictly follow an SOP covering hundreds of cases (each of your cases will be checked by a different supervisor, but no one knows who that supervisor will be beforehand), and the most severe SOP breach can cause you to end up going to jail (because of unintentionally violating serious legal regulations)
     - You have to know which cases should be handled by yourself and which have to be escalated to your supervisors (but no one knows which supervisor will handle your escalation beforehand), because escalating too many cases that could've been handled by yourself will be treated as incompetence and get you fired, while handling cases yourself that should've been escalated is like asking to be fired immediately
     - As the SOP is constantly revised by the upper management, it'll change quite a bit every several weeks on average, so a daily verbal briefing at the start of the working day is always exercised, to ensure all of you have the updated SOP, and to remind everyone of the mistakes made recently (without mentioning who made them, of course)
     - Clearly, an SOP of this scale with this frequency and amount of changes won't be fully written down in black and white (it'd cost hundreds of A4 papers per copy), otherwise the company would have to hire staff dedicated to keeping the SOP up to date, which the company would of course treat as ineffective and inefficient (and a waste of tons of paper), so the company expects EVERYONE (including the supervisors themselves) to ALWAYS have ABSOLUTELY accurate memory when working according to the SOP
     - As newcomers join, they have about 2 months to master the SOP, and senior staff of the same rank will accompany these newcomers during this period, meaning that the seniors will verbally teach the newcomers the SOP, relying on the memory of the former and assuming that the latter will remember correctly

     Needless to say, the whole workflow is just asking for trouble, because:
     - Obviously, no one can have absolutely accurate memory, especially for an SOP covering hundreds of cases, so it's just incredibly insane to assume that EVERYONE ALWAYS has ABSOLUTELY accurate memory of it, but that's what the whole workflow is based on
     - As time passes, one's memory will gradually become more and more inaccurate (since human memory isn't lossless), so eventually someone will make a mistake, and the briefings on the following days will try to correct it, meaning that the whole briefing thing is just an ad hoc, rather than systematic, way to correct the staff's memories
     - Similarly, as newcomers are taught by the seniors from the latter's memory, and human communication isn't lossless either, it's actually unreasonable to expect the newcomers to completely capture the SOP this way (because of the memory loss of the seniors, the information loss in the communication, and the memory loss of the newcomers, which is essentially the phenomenon revealed by Chinese whispers), even when they have about 2 months to do so
     - As each of your cases will be checked by a different supervisor, no one knows who that supervisor will be beforehand, and supervisors also have memory losses (even though they'll usually deny that), eventually you'll have to face memory conflicts among supervisors, without those supervisors even realizing that such conflicts among them exist (the same problem will eventually manifest when you escalate cases to them, and this includes whether the cases should actually be escalated)
     - Therefore, over time, the memories of the SOP among the staff will gradually drift further and further apart, eventually to the point that you won't know what to do as the memory conflicts among the supervisors become mutually exclusive in some parts of the SOP, meaning that you'll effectively have to gamble on which supervisor will handle your escalation and/or check your case, because there's no way you can know beforehand

     Traditionally, the solution would be either enforcing the ridiculously wrong assumption that EVERYONE must ALWAYS have ABSOLUTELY accurate memory of an SOP worth hundreds of A4 papers even harder and more ruthlessly, or hiring staff dedicated to keeping a written version of the SOP up to date, but even the written version will still have problems (albeit much smaller ones), because:
     - As mentioned, while it does eliminate the issue of gradually increasing memory conflicts among staff over time, having a written copy per staff member would be far too ineffective and inefficient (not to mention a serious waste of resources)
     - When a written version of the SOP spans hundreds of A4 papers and just a small part of the SOP changes, the staff dedicated to keeping the SOP up to date will have to reprint the involved pages for every copy and rearrange those copies before giving them back to the other staff, and possibly highlight the changed parts (and when they were changed) so the others won't have to reread the whole abomination again, and this will constantly put a very heavy burden on the former
     - Because the staff will now rely on their own copies of the written version of the SOP, if there are differences among those copies, the conflicts among the SOP implementations will still occur, even though now it'd be obvious that the staff dedicated to keeping the SOP up to date will take the blame instead (but that'd mean they'll ALWAYS have to keep every copy up to date IMMEDIATELY, which is indeed an extremely harsh requirement for them)
     - As it'd only be natural and beneficial for the staff to add their own notes to their own copies of the written version of the SOP, when those copies get updated some of their notes can be gone because the involved pages will be replaced, so those staff might have to rewrite those notes, regardless of whether they've taken photos of those pages with their notes beforehand (but taking such photos would risk leaking the SOP), which still adds an excessive burden on those staff
     - As you're supposed to face customers on the other side of the booth while you're using a computer to do the work, it'd be detrimental to the customer service quality (and sometimes this can lead to the customer filing formal complaints, which are very major troubles) if you have to take out the written version of the SOP in front of the customer when you're not sure what to do in a case, even though that's still way, way better than screwing up the case

     Combining all of the above, that's where DVCS for the SOP can come into play:
     - Because the written version of the SOP is now a soft copy instead (although this part works for soft copies without DVCS too), it can be placed inside the system and the staff can just view it on the computer without much trouble, since the computer screen isn't facing the customer (and this largely mitigates the risk of having the staff leak the written version of the SOP)
     - Because the written version of the SOP is now in a DVCS, each staff member has their own branch or fork of the SOP, which can be used to keep their own private notes there as file changes (this assumes that the SOP is broken down into several or even dozens of files, but that should be a given), and their notes can be easily added back to the updated versions of the files previously carrying those notes by simply viewing the diff of those files (or better yet, those notes can be completely separate files, although that'd mean the staff have to know which note files correspond to which SOP files, which can be solved by carefully naming all those files and/or using well-named folders); a minimal sketch of this notes-branch workflow follows at the end of this post
     - Because the written version of the SOP is now centralized in the system (the master branch), every staff member should have the same latest version, thus virtually eliminating the problems caused by conflicts among different written versions from different staff members, and the need for dedicated manual work to keep them consistent
     - Clearly, the extra costs induced by this DVCS application are its initial system setup and the introduction of newcomers to using DVCS at work, which are all one-time costs instead of long-term ones, and compared to the troubles caused by the other workflows, these one-time costs are really trivial
     - Leveraging the issues and pull requests features (using blames as well might be just too much) of any decent DVCS, any staff member can raise concerns about the SOP, and they'll either be solved, or at least the problems will become clear to everyone involved, so this should be more effective and efficient than just verbal reflections towards particular colleagues and/or supervisors on difficulties faced (if called for, anonymous issues and pull requests can even be used, although that'd seem to be going overboard)

     So the detailed implementation of the new workflow can be something like this:
     - The briefing before starting the work of the day should still take place, as it can be used to emphasize the most important SOP changes and/or the recent mistakes made by colleagues (as blames not pointing to anyone specific) in the DVCS, so the staff don't have to check all the recent diffs themselves
     - Whenever you're free, you can make use of the time to check the parts of the SOP of your concern from the computer in your booth, including parts that are unclear to you and recent changes, and even submit an anonymous issue for difficulties you've faced trying to follow those parts of the SOP (or you can try to answer some issues in the DVCS raised by others as a means of helping them, without having to leave your booth or speak out loud and disturb the others)
     - When you're facing a customer right in front of you and you're unsure what to do next, you can simply ask the customer to wait for a while and check the involved parts of the SOP without the customer even noticing (you can even use issues to ask for help and hope there are colleagues who are free and will help you quickly), thus minimizing the damage to the customer service quality
     - To prevent the DVCS from being abused by some staff members as a poor man's chat room at work, the supervisors can periodically check a small portion of the issues, blames and pull requests there as samples to see if they're just essentially conversations unrelated to work, and the anonymity feature can be suspended for a while if the abusers abuse that as well (if they don't use anonymity when making those conversations, then the supervisors can apply disciplinary actions towards them directly), but they shouldn't always check all of them or those supervisors would be exhausted to death by the potentially sheer number of such things
     - Of course, you still have to try to master the SOP yourself, as the presence of this DVCS, which is just meant to be an AUXILIARY to your memory, doesn't mean you don't have to remember anything, otherwise you'd end up constantly asking the customer to wait unnecessarily (to check the SOP) and asking colleagues redundant questions (even with minimal disruptions), causing you to become so ineffective and inefficient all the time that you'd still end up being fired in no time

     Of course, it's easier said than done in the real world, because while setting up a DVCS and training newcomers to use it are both easy, simple and small tasks, the real key that makes things complicated and convoluted is the willingness of the majority to adopt this totally new way of doing things, because it's such a grand paradigm shift that it's utterly alien to most of those who aren't software engineers (when even quite some software engineers still reject DVCS in situations clearly needing it, just think about the resistance imposed by the outsiders). Also, there are places where DVCS just isn't suitable at all, like emergency units having to strictly follow SOPs, because the situations would be too urgent for them to check the SOP in a DVCS even if they could use their mobile phones under such circumstances, and these are some cases where they do have to ALWAYS have ABSOLUTELY ACCURATE memories, as it's already the least evil we've known so far (bear in mind that they'd already have received extensive rigorous training for months or even years before being put into action).

     Nevertheless, I still believe that, if some big companies having nothing to do with software engineering are brave enough to use some short-term projects as pilot schemes for using DVCS to manage the SOPs of their staff, eventually more and more companies will realize the true value of this new way of doing things, causing more and more companies to follow, eventually to the point that it becomes the norm across multiple industries, just like a clerk using MS Office in their daily work.

     To conclude, I think that DVCS can at least be applied to manage some SOPs of some businesses outside of software engineering, and maybe it can be used for many other aspects of those industries as well; it's just that SOP management is the one where I've personally felt the enormous pain of lacking DVCS when it's obviously needed the most.
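     As a concrete illustration of the notes-branch part of this workflow, here's a minimal sketch, assuming a Node.js environment with git installed, a hypothetical local clone of the SOP repository named sop, and a hypothetical per-staff notes branch named notes/my-booth; it just shells out to ordinary git commands:

         // Minimal sketch: keep a private notes branch and replay it on top of the
         // latest master SOP whenever the SOP gets updated.
         const { execSync } = require("child_process");

         // Run a git command inside the (hypothetical) local clone of the SOP repository.
         const git = args => execSync(`git ${args}`, { cwd: "./sop", stdio: "inherit" });

         git("fetch origin");            // grab the latest SOP revisions from the central repository
         git("checkout notes/my-booth"); // each staff member keeps a private notes branch
         git("rebase origin/master");    // replay the personal notes on top of the updated SOP
         git("diff origin/master");      // review what the private notes add relative to the official SOP

     Whether the notes live as edits to the SOP files themselves or as separate note files, this fetch/rebase/diff cycle is about all a non-technical staff member would need to repeat.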
  6. Just read some of my game code that I wrote myself 4 years ago, and now I don't understand it at all lol

    1. PhoenixSoul

      PhoenixSoul

      I can definitely relate to forgetting things like that.

    2. Kayzee

      Kayzee

      I don't find that to be much of a problem myself. :3

       

      Edit: Which is good given how much I slack on working on my game. Some parts of the code probably are 4+ years old!

  7. Updates v1.00d (GMT 0600 19-5-2021): Fixed all notetags not working bug
  8. Updates v1.00b (GMT 0300 27-Mar-2020):
     1. You no longer have to edit the value of DoubleX_RMMZ.Preloaded_Resources_File when changing the plugin file name
     2. Fixed the crashes when preloading animations, images, etc. without hues (such cases will be understood as having the default hue 0 only)
  9. Let's say that there's a reproducible fair test with the following specifications:
     - The variable to be tested is A
     - All the other variables, as a set B, are controlled to be K
     - When A is set to X, the test result is P
     - When A is set to Y, the test result is Q

     Can you then always safely claim that X and Y must universally lead to P and Q respectively, and that A is solely responsible for the difference between P and Q universally? If you think it's a definite yes, then you're probably oversimplifying control variables, because the real answer is this: when the control variables are set to K, then X and Y must lead to P and Q respectively (a small formal restatement of this point follows at the end of this post).

     Let's show an example using software engineering (Test 1): say there's a reproducible fair test about the difference in impact between procedural, object oriented and functional programming paradigms on the performance of software engineering teams, with the other variables, like project requirements, available budgets, software engineer competence and experience, software engineering team synergy, etc., controlled to be the same specified constants, and the performance measured as the amount and importance of the conditions and constraints fulfilled in the project requirements, the budget spent (mainly time), the amount and severity of unfixed bugs, etc. The result is that procedural programming always performs the best in project requirement fulfillment and budget consumption, with the fewest and least severe bugs, and the result is reproducible, so this result seems scientific, right? So can we safely claim that procedural programming always universally performs the best in all those regards? Of course that's absurd to the extreme, but those experiments are indeed reproducible fair tests, so what's really going on?

     The answer is simple: the project requirements are always (knowingly or unknowingly) controlled to be those inherently suited for procedural programming, like writing the front end of an easy, simple and small website just for the clients to conveniently fill in some basic forms online (back way before things like Google Forms became a real thing), with the project having to be finished within a very tight time scope. In this case, it's obvious that both object oriented and functional programming would be overkill, because the complexity is tiny enough to be handled by procedural programming directly, and the benefits of the former two need time to materialize, whereas the tight time scope of the project means that such up front investments are probably not worth it. If the project were changed to writing a AAA game, or a complicated and convoluted full stack cashier and inventory management software for supermarkets, then I'm quite sure that procedural programming wouldn't perform the best, because procedural programming just isn't suitable for writing such software (actually, in reality, the vast majority of practical projects should be solved using the optimal mix of different paradigms, but that's beyond the scope of this example). This example aims to show that even a reproducible fair test isn't always accurate when it comes to drawing universal conclusions, because the contexts of that test, which are the control variables, also influence the end results, so the contexts should always be clearly stated when drawing the conclusions, to ensure that those conclusions won't be applied to situations where they no longer hold.

     Another example can be a reproducible fair test examining whether proper up front architectural designs (but that doesn't mean it must be waterfall) are more productive than counterproductive, or vice versa (Test 2). If the test results are that they're more productive than counterproductive, it still doesn't mean that this is universally applicable, because the project requirements, as part of the control variables, can be well-established, well-known problems with well-known solutions, with no abrupt or absurd changes to the specifications. Similarly, if the test results are that they're more counterproductive than productive, it still doesn't mean that this is universally applicable, because the project requirements, as part of the control variables, can be highly experimental, incomplete and unclear in nature, meaning that the software engineering team must first quickly explore some possible directions towards the final solution, and perhaps each direction demands a PoC or even an MVP to be properly evaluated, so proper architectural designs can only gradually emerge and be refined in such cases, especially when the project requirements are constantly adjusted drastically.

     If a universally applicable conclusion has to be reached, then one way to solve this is to make even more fair tests, but with the control variables set to different constants, and/or with different variables to be tested, to avoid conclusions that actually just apply to some unstated contexts. For instance, in Test 2, the project nature, as the major part of the control variables, can be changed, and one can then check whether the following new reproducible fair tests testing the productivity of proper up front architectural designs have changed results; or in Test 1, the programming paradigm to be used can become a part of the control variables, whereas the project nature can become the variable to be tested in the following new reproducible fair tests. Of course, that'd mean a hell of a lot of reproducible fair tests to be done (and all those results must be properly integrated, which is itself a very complicated and convoluted matter), and the difficulties and costs involved likely make the whole thing too infeasible to be done within a realistic budget in the foreseeable future, but it's still better than making some incomplete tests and falsely drawing universal conclusions from them, when those conclusions can only be applied to some contexts (and those contexts should be clearly stated).

     Therefore, to be practical while still respectful of the truth, until the software engineering industry can finally perform complete tests that can reliably draw actually universal conclusions, it's better for practitioners to accept that many of the conclusions there are still just contextual, and it's vital for us to carefully and thoroughly examine our circumstances before applying those situational test results. For example, JavaScript (and sometimes even TypeScript) is said to suck very hard, partly because there are simply too many insane quirks, and writing JavaScript is like driving without any traffic rules at all, so it's only natural that we should avoid JavaScript as much as we can, right? However, to a highly devoted, diligent and disciplined JavaScript programmer, JavaScript is one of the few languages that provide an amount of control and freedom that is simply unthinkable in many other programming languages, and such programmers can use it extremely effectively and efficiently, all without causing too much technical debt that can't be repaid on time (of course, that's only possible when such programmers are very experienced in JavaScript and care a great deal about code quality and architectural design). The difference here is again the underlying context: those blaming JavaScript might usually be working on large projects (like those way beyond the 10M LoC scale) with large teams (like way beyond 50 members), and it'd be rather hard to have a team with all members being highly devoted, diligent and disciplined, so the amount of control and freedom offered by JavaScript will most likely lead to chaos; whereas those praising JavaScript might usually be working alone or with a small team (like way fewer than 10 members) on small projects (like those way below the 100k LoC scale), and the strict rules imposed by many statically and strongly typed languages (especially Java with checked exceptions) may just be getting in their way, because those restrictions lead to up front investments, which need time and project scale to manifest their returns, and such time and project scale are usually lacking in small projects worked on by small teams, where short-term effectiveness and efficiency are generally more important. Do note that these opinions, when combined, can also be regarded as reproducible fair tests, because the amount of coherent and consistent opinions on each side is huge, and many of them won't make the same complaint or compliment when only the language is changed. Therefore, it's normally pointless to totally agree or disagree with a so-called universal conclusion about some aspects of software engineering; what's truly meaningful instead is to try to figure out the contexts behind those conclusions, assuming that they're not already stated clearly, so we can better know when to apply those conclusions and when to apply some others.

     Actually, similar phenomena exist outside of software engineering. For instance, let's say there's a test on the relation between the number of observers of a knowingly immoral wrongdoing and the percentage of them going to help the victims and stop the culprits, with the entire scenes under the watch of surveillance cameras, so those recordings are sampled in large amounts to form reproducible fair tests. Now, some researchers claim that the results from those samplings are that the more observers are out there, the higher the percentage of them going to help the victims and stop the culprits, so can we safely conclude that the bystander effect is actually wrong? It at least depends on whether those bystanders knew that those surveillance cameras existed, because if they did know, then it's possible that those results are affected by the Hawthorne effect, meaning that the percentage of them going to help the victims and stop the culprits could be much, much lower if there were no surveillance cameras, or if they didn't know those surveillance cameras existed (but that still doesn't mean the bystander effect is right, because the truth could be that the percentage of bystanders going to help the victims has little to do with the number of bystanders). In this case, the existence of those surveillance cameras is actually a major part of the control variables in those reproducible fair tests, and this can be regarded as an example of the observer's paradox (whether this can justify the ever-increasing number of surveillance cameras everywhere is beyond the scope of this article). Of course, this can be rectified, like trying to conceal those surveillance cameras, or finding some highly trained researchers to regularly record places that are likely to have culprits openly hurting victims with a varying number of observers, without those observers knowing the existence of those researchers, but needless to say, these alternatives are just so unpragmatic that no one will really do it, and they'd also pose even greater problems, like serious privacy issues, even if they could actually be implemented.

     Another example is that, when I was still a child, I volunteered for a research study on the sleep quality of children in my city, and I was asked to sleep in a research center, meaning that my sleeping behaviors would be monitored. I can still vaguely recall that I ended up sleeping quite poorly that night, despite the fact that both the facilities (especially the bed and the room) and the personnel there were really nice, while I slept well most of the time back when I was a child, so such a seemingly strange result was probably because I failed to quickly adapt to a vastly different sleeping environment, regardless of how good that bed in that research center was. While I can vaguely recall that the full results of the entire study of all the children who volunteered were far from ideal, the change of sleeping environment still played a main part in the control variables of those reproducible fair tests, so I still wonder whether the sleep quality of the children in my city back then was really that subpar. To mitigate this, those children could have slept in the research center for many nights, instead of just one, in order to eliminate the factor of having to adapt to a new sleeping environment, but of course the cost of such research to both the researchers and the volunteers (as well as their families) would be prohibitive, and the sleep quality results still might not hold when those children go back to their original sleeping environments. Another way might be to let parents buy some instruments, with some training, to monitor the sleep quality of their children in their original sleeping environment, but again, the feasibility of such research and the willingness of the parents to carry it out would be really great issues.

     The last example is the famous Milgram experiment: does it really mean most people are so submissive to their perceived authorities when it comes to immoral wrongdoings? There are some problems to be asked, at least including the following:
     - Did they really think the researchers would just let those victims die or suffer irreversible injuries due to electric shocks? After all, such experiments would likely be highly illegal, or at least highly prone to severe civil claims, meaning that it's only natural for those being researched to doubt the true nature of the experiment.
     - Did those fake electric shocks and fake victims act convincingly enough to make the experiment look real? If those being researched figured out that those were just fakes, then the meaning of the whole experiment would be completely changed.
     - Did those being researched (the "teachers") really not know that they were actually the ones being researched? Because if those "students" were really the ones being researched, why would the researchers need extra participants to carry out the experiments (meaning that the participants would wonder about the necessity of some of them being "teachers", and why not just make them all "students" instead)?
     - Assuming that the whole "teachers" and "students" setup, as well as the electric shocks, were real, did those "students" sign some kind of private but legally valid consent proving that they knew they were going to receive real electric shocks when giving wrong answers, and that they were willing to face them for the research? If those "teachers" had reasons to believe that this was the case, their behaviors would be really different from those in their real lives.

     In this case, the majority of the control variables in those reproducible fair tests are the test setups themselves, because such experiments would be immoral to the extreme if those being researched truly did immoral wrongdoings, meaning that it'd be inherently hard to properly establish a concrete and strong causation between immoral wrongdoings and some other fixed factors, like submission to authorities. Some may say that those being researched did believe that they were performing immoral wrongdoings because of their reactions during the test and the interview afterwards, and those reactions will also manifest when someone does do some knowingly immoral wrongdoings, so the Milgram experiment, which has already been reproduced, still largely holds. But let's consider this thought experiment: you're asked to play an extremely gory, sadistic and violent VR game with state of the art audio, immersion and visuals, with some authorities ordering you to kill the most innocent characters with the most brutal means possible in that game, and I'm quite certain that many of you would have many of the reactions manifested by those being researched in the Milgram experiment, but that doesn't mean many of you would knowingly perform immoral wrongdoings when being submissive to the authority, because no matter how realistic those actions seem to be, it's still just a game after all. The same might hold for the Milgram experiment as well, where those being researched did know that the whole thing was just a great fake on one hand, but still manifested reactions that are the same as someone knowingly doing some immoral wrongdoings on the other, because the fake felt so real that their brains got cheated and showed some real emotions to some extent despite them knowing that it's still just a fake after all, just like real immense emotions being evoked when watching some immensely emotional movies. It doesn't mean the Milgram experiment is pointless though, because it at least proves that being submissive to perceived or real authorities will make many people do many actions that they wouldn't normally do otherwise, but whether such actions include knowingly immoral wrongdoings might remain inconclusive from the results of that experiment (even if authorities do cause someone to do immoral wrongdoings that wouldn't be done otherwise, it could still be because that someone really doesn't know that they're immoral wrongdoings due to the key information being obscured by the authorities, rather than being submissive to those authorities even though that someone knows that they're immoral wrongdoings). Therefore, to properly establish a concrete and strong causation between knowingly immoral wrongdoings and submission to perceived or real authorities, we might have to investigate actual immoral wrongdoings in real life, and what parts the perceived or real authorities were playing in those incidents.

     To conclude, those making reproducible fair tests should clearly state their underlying control variables when drawing conclusions where feasible, and those trying to apply those conclusions should be clear about their circumstances to determine whether those conclusions do apply to the situations they're facing, as long as the time needed for such assessments is still practical enough in those cases.
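     To make the opening point a bit more explicit, here's a minimal formal restatement (in LaTeX, with A, B, K, X, Y, P and Q being the symbols from the specification at the start of this post) of what such a reproducible fair test actually establishes:

         % What the test licenses: conditional claims that carry the context B = K.
         \[
           (B = K \land A = X) \Rightarrow P
           \qquad\text{and}\qquad
           (B = K \land A = Y) \Rightarrow Q
         \]
         % What it does NOT license: the context-free universal claims
         %   A = X \Rightarrow P   and   A = Y \Rightarrow Q,
         % which would only follow if the results were independent of B.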
  10. DoubleX

    DoubleX RMMV Skill Hotkeys

    Updates v1.01b (GMT 0400 13-2-2021): Fixed the bug of being able to select unusable hotkey skills in the skill window in battles
  11. Updates { codebase: "1.1.1", plugin: "v1.02a" } (2021 Feb 7 GMT 1300):
      1. Added skillItemCooldownGaugeColor1 and skillItemCooldownGaugeColor2 to let you show the TPB battler cooldown bar inside battles with configurable colors
      2. Added cancelBattlerCooldownHotkeys and cancelSkillItemCooldownHotkeys to let you set some hotkeys to cancel the battler/skill item cooldown of the corresponding actors respectively
      3. Added the following parameters:
         - canCancelBattlerCooldown
         - canCancelSkillItemCooldown
         - cancelBattlerCooldownFail
         - cancelSkillItemCooldownFail
         - cancelBattlerCooldownSuc
         - cancelSkillItemCooldownSuc
         - canCancelBattlerCooldownNotetagDataTypePriorities
         - canCancelSkillItemCooldownNotetagDataTypePriorities
         - cancelBattlerCooldownFailNotetagDataTypePriorities
         - cancelSkillItemCooldownFailNotetagDataTypePriorities
         - cancelBattlerCooldownSucNotetagDataTypePriorities
         - cancelSkillItemCooldownSucNotetagDataTypePriorities
      4. Added the following plugin commands:
         - canCancelBattlerCooldown
         - canCancelSkillItemCooldown
         - cancelBattlerCooldown
         - cancelSkillItemCooldown
      5. Added the following notetags:
         - canCancelBattler
         - canCancelSkillItem
         - cancelBattlerFail
         - cancelSkillItemFail
         - cancelBattlerSuc
         - cancelSkillItemSuc
      Video
  12. I just received an email like this:

      Title: Notification Case #(Some random numbers)
      Sender: (Non-PayPal logo)service@paypal.com.(My PayPal account location) <(Non-PayPal email used by the real scammers)>
      Recipients: (My email), (The email of an innocent straw man used by the real scammers)
      Contents (with UI styles copying those in real PayPal emails):
      Someone has logged into your account
      We noticed a new login with your PayPal account associated with (The email of an innocent straw man used by the real scammers) from a device we don't recognize. Because of that we've temporarily limited your account until you renew and verify your identity. Please click the button below to login into your account for verify your account.
      (Login button copying that in real PayPal emails)
      If this was you, please disregard this email.
      (Footers copying those in real PayPal emails)

      I admit that I'm incredibly stupid, because I almost believed that it was a real PayPal email, and I only realized that it was a scam right after I'd clicked the login button, because it links to a URL that's completely different from the login page of the real PayPal (so fortunately I didn't input anything there). While I've faced many old-school phishing emails and could figure them all out right from the start, I've never seen phishing emails like this, and what makes me feel even more dumb is that I already had 2FA applied to my PayPal account before receiving this scam email, meaning that my phone would've received a PayPal verification SMS out of nowhere if there had really been an unauthorized login to my account.

      Of course, that straw man email owner is completely innocent, and I believe that owner already received the same scam email with me as the straw man, so that owner might think that I really performed an unauthorized login into his/her PayPal account, if he/she didn't realize that the whole email is just a scam. Before I realized that it's just a scam, I thought he/she had really done what the email claims as well, so I just focused on logging into my PayPal account to assess the damage done and evaluate the countermeasures to be taken, and if I hadn't realized that it's just a scam, I'd already have given the password of my PayPal account to the scammers on their fake PayPal login page.

      I suspect that many more PayPal users might have already received/are going to receive such scam emails, and I think this way of phishing can work for many other online payment gateways as well, so I think I can do some good by sharing my case, in the hope that only I'll be this dumb (even though I didn't give the scammers my PayPal password in the end).
  13. The complete Microsoft Word file can be downloaded here (as a raw file)

      Summary

      The whole password setup/change process is as follows (a minimal sketch of the first hashing steps follows at the end of this post):
      1. The client inputs the user ID and its password in plaintext
      2. A salt for hashing the password in plaintext will be randomly generated
      3. The password will be combined with a fixed pepper in the client software source code and the aforementioned salt, to be hashed in the client terminal by SHA3-512 afterwards
      4. The hashed password, as a hexadecimal number with 128 digits, will be converted to a base 256 number with 64 digits, which will be repeated 8 times in a special manner, and then broken down into a list of 512 literals, each being either the numeric literals 1 to 100 or any of the 156 named constants
      5. Each of those 512 numeric literals or named constants will be attached to existing numeric literals and named constants via different ways and combinations of additions, subtractions, multiplications and divisions, and the whole attachment process is determined by the fixed pepper in the client software source code
      6. The same attachment process will be repeated, except that this time it's determined by a randomly generated salt in the client terminal
      7. That list of 512 distinct roots, with the ordering among all roots and all their literal expressions preserved, will produce the resultant polynomial equation of degree 512
      8. The resultant polynomial equation will be encoded into numbers and number separators in the client terminal
      9. The encoded version will be encrypted by RSA-4096 on the client terminal with a public key there before being sent to the server, which has the private key
      10. The server decrypts the encrypted polynomial equation from the client with its RSA-4096 private key, then decodes the decrypted version in the server to recover the original polynomial equation, which will finally be stored there
      11. The 2 aforementioned different salts will be encrypted by 2 different AES-256 keys in the client software source code, and their encrypted versions will be sent to the server to be stored there
      12. The time complexity of the whole process, except the SHA3-512, RSA-4096 and AES-256, should be controlled to quadratic time

      The whole login process is as follows:
      1. The client inputs the user ID and its password in plaintext
      2. The client terminal will send the user ID to the server, which will send its corresponding salts for hashing the password in plaintext and forming distinct roots respectively, already encrypted in AES-256, back to the client terminal, assuming that the user ID from the client does exist in the server (otherwise the login fails and nothing will be sent back from the server)
      3. The password will be combined with a fixed pepper in the client software source code, and the aforementioned salt that is decrypted in the client terminal using the AES-256 key in the client software source code, to be hashed in the client terminal by SHA3-512 afterwards
      4. The hashed password, as a hexadecimal number with 128 digits, will be converted to a base 256 number with 64 digits, which will be repeated 8 times in a special manner, and then broken down into a list of 512 literals, each being either the numeric literals 1 to 100 or any of the 156 named constants
      5. Each of those 512 numeric literals or named constants will be attached to existing numeric literals and named constants via different ways and combinations of additions, subtractions, multiplications and divisions, and the whole attachment process is determined by the fixed pepper in the client software source code
      6. The same attachment process will be repeated, except that this time it's determined by the corresponding salt sent from the server that is decrypted in the client terminal using a different AES-256 key in the client software source code
      7. That list of 512 distinct roots, with the ordering among all roots and all their literal expressions preserved, will produce the resultant polynomial equation of degree 512
      8. The resultant polynomial equation will be encoded into numbers and number separators in the client terminal
      9. The encoded version will be encrypted by RSA-4096 on the client terminal with a public key there before being sent to the server, which has the private key
      10. The server decrypts the encrypted polynomial equation from the client with its RSA-4096 private key, then decodes the decrypted version in the server to recover the original polynomial equation
      11. Whether the login will succeed depends on whether the literal expression of the polynomial equation from the client exactly matches the expected counterpart already stored in the server
      12. The time complexity of the whole process, except the SHA3-512, RSA-4096 and AES-256, should be controlled to quadratic time

      For an attacker trying to get the raw password in plaintext:
      1. If the attacker can only sniff the transmission from the client to the server to get the encoded then encrypted version (which is then encrypted by RSA-4096) of the polynomial equation, the salt of its roots, and the counterpart for the password in plaintext, the attacker first has to break RSA-4096, then has to figure out the highly secret and obfuscated algorithm to decode those numbers and number separators into the resultant polynomial equation and the way its roots are attached by existing numeric literals and named constants
      2. If the attacker has the resultant polynomial equation of degree 512, its roots must be found, but there's no direct formula to do so analytically due to the Abel-Ruffini theorem, and factoring such a polynomial with 156 different named constants efficiently is very, very complicated and convoluted
      3. If the attacker has direct access to the server, the expected polynomial equation can be retrieved, but the attacker still has to solve that polynomial equation of degree 512 to find all its roots with the right ordering among them and all their correct literal expressions
      4. If the attacker has direct access to the client software source code, the pepper for hashing the password in plaintext, the pepper used on the polynomial equation roots, and the highly secret and obfuscated algorithm for using them with the salt counterparts can be retrieved, but it's still far from being able to find all the roots of the expected polynomial equation of degree 512
      5. If the attacker has all those roots, the right ordering among them and all their correct literal expressions still have to be figured out, and the salts and peppers for those roots have to be properly removed as well
      6. If the attacker has all those roots with the right ordering among them, all their correct literal expressions, and the salts and peppers on them removed, the attacker has effectively recovered the hashed password, which is mixed with salts and peppers in plaintext
      7. The attacker then has to figure out the password in plaintext, even with the hashing function, salt, pepper, and the highly secret and obfuscated algorithm that combines them known
      8. Unless there are really efficient algorithms for every step involved, the time complexity of the whole process can be as high as factorial time
      9. As users are still inputting passwords in plaintext, dictionary attacks still work to some extent, but if the users are careless with their password strengths, then no amount of cryptography will be safe enough
      10. Using numerical methods to find all the roots won't work in most cases, because such methods are unlikely to find those roots analytically, let alone with the right ordering among them and all their right literal expressions, which are needed to produce the resultant polynomial equation with literal expressions exactly matching the expected one
      11. Using rainbow tables won't work well either, because such a table would be way too large to be used in practice, due to the number of polynomial equations with degree 512 being unlimited in theory
      12. Strictly speaking, the whole password encryption scheme isn't a one-way function, but the time complexity needed for encryption compared to that for decryption is so trivial that this scheme can act like such a function

      Areas demanding further research:
      1. The time complexity of factoring a polynomial of degree n with named constants into n factors analytically
      2. Possibilities of collisions from the ordering among all roots and all their different literal expressions
      3. Existence of efficient algorithms for finding the right ordering among all roots and all their right literal expressions
      4. Strategies for setting up the fixed peppers and generating random salts to form roots with maximum encryption strength

      Essentially, the whole approach of using polynomial equations for encryption is to exploit equations that are easily formed from their analytical solution sets but very hard to solve analytically, especially when exact literal matches, rather than just mathematical identity, are needed to match the expected equations. So it's not strictly restricted to polynomial equations with a very high degree; maybe very high order partial differential equations with many variables, complex coefficients and functions accepting complex numbers can also work, because there is no known analytical algorithm for solving such equations yet, but analytical solutions are demanded to reproduce the same partial differential equations with exact literal matches, as long as performing partial differentiations analytically can be efficient enough.
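      As a rough illustration of steps 1 to 4 of the setup process above, here's a minimal sketch in Node.js; the pepper value, the salt length, the simple concatenation used to combine them with the password, and the example password are all hypothetical placeholders, and Node's built-in crypto module is just one possible way to get SHA3-512:

          // Minimal sketch of steps 1-4: hash the plaintext password with a salt and a
          // fixed pepper via SHA3-512, then view the 128-hex-digit digest as 64 base-256 digits.
          const crypto = require("crypto");

          const PEPPER = "hypothetical-pepper-in-client-source"; // fixed in the client software source code
          const salt = crypto.randomBytes(32).toString("hex");   // randomly generated per password setup

          function hashToBase256Digits(passwordPlaintext) {
            const hex = crypto
              .createHash("sha3-512")
              .update(passwordPlaintext + PEPPER + salt) // simple concatenation; the real combining scheme is unspecified here
              .digest("hex");                            // 128 hexadecimal digits
            const digits = [];
            for (let i = 0; i < hex.length; i += 2) {
              digits.push(parseInt(hex.slice(i, i + 2), 16)); // one base-256 digit per byte
            }
            return digits; // 64 base-256 digits, later repeated and expanded into the 512 root literals
          }

          console.log(hashToBase256Digits("correct horse battery staple").length); // 64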
  14. Updates { codebase: "1.1.1", plugin: "v1.01a" } (2020 Dec 26 GMT 1300):
      1. Added the following notetag types:
         - subjectMiss
         - subjectEva
         - subjectCnt
         - subjectMrf
         - subjectCri
         - subjectNorm
         - subjectSubstitute
      2. Added the following parameters:
         - subjectMissNotetagDataTypePriorities
         - subjectEvaNotetagDataTypePriorities
         - subjectCntNotetagDataTypePriorities
         - subjectMrfNotetagDataTypePriorities
         - subjectCriNotetagDataTypePriorities
         - subjectNormNotetagDataTypePriorities
         - subjectSubstituteNotetagDataTypePriorities
      3. Fixed the eventEntry of all notetags not correctly accepting all intended suffixes and rejecting the unintended ones
      Video
      The latest version of DoubleX RMMZ Enhanced Codebase is needed as well :)