Kaed Posted May 12, 2019 (edited)

I've noticed a few PRs over the last month where someone adds a new feature without tidying up its consequences. For example, completely upsetting balance without adding anything new (à la the late LordFowl PR that removed borgs and did literally nothing else to compensate). Just recently I had to do some cleanup after the pen-clicking PR caused all children of the pen object, such as chalk and crayons, to also click like pens. I also recently put through a PR that severely reduced the damage of spears and pikes, which had apparently been flat-out ported in without anyone considering whether the values actually written there might cause problems.

If someone is going to contribute code, we need to start encouraging them to have a better idea of what they are doing and the consequences of their changes, instead of chuckling and pushing through the latest meme PR because it's funny, unless someone points out the problems in the PR thread. It was only a couple of lines of code to fix the chalk-clicking bug and the ridiculous damage of diamond/plasteel spears, but the problem could be a lot harder to fix the next time a poorly thought-out PR goes through because no one took the time to sit down and study its effects on the rest of the code or on game balance. It might even go unnoticed for months if it's something obscure and rarely used in-game. For instance, it took months after someone added telegibbing before we discovered that ninjas can, with proper positioning, telegib people they have in a grab. There's still a bug report up about that, sitting there unfixed, presumably because no one really knows how to fix it. And we still have situations on station where it's too dangerous to use some of the default teleport beacons on the map, because they're mapped too close to windows and objects that can telegib you completely outside your control.
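The pen-clicking bug is a classic inheritance pitfall. Here is a minimal sketch in Python (the actual codebase is DM; the class and method names are illustrative stand-ins, not the real object tree): a feature bolted onto the parent type silently propagates to every child.

```python
class Pen:
    def attack_self(self):
        # New feature added to the parent type: clicking the pen.
        return "click"

class Crayon(Pen):
    # Crayon inherits from Pen for convenience -- and silently
    # picks up the new clicking behaviour along with everything else.
    pass

print(Pen().attack_self())     # "click" -- intended
print(Crayon().attack_self())  # "click" -- unintended side effect
```

The fix really is the couple of lines mentioned above: override or guard the behaviour in the children that shouldn't have it.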
Important questions we should be asking during a PR (and probably more):

- Does the object I modified to have new features have children in the code, and is this new feature appropriate for those children?
- Do I know what all the values of this item actually are, if it's pulling information from other parts of the code that aren't right in front of me?
- Have I considered what removing/adding this feature might do to gameplay balance, or am I acting purely out of frustration over something that happened in a round?
- Do I actually know everything that is going to be affected by this change, or am I just assuming?

It's great that we have an open-source community where anyone can contribute, but we need to be a little more responsible about what we put forward.

Edited May 12, 2019 by Kaed
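The first question on that list can even be answered mechanically. A hedged Python illustration (in DM you would search the codebase for subtypes instead; the class names here are made up for the example):

```python
class Item: pass
class Pen(Item): pass
class Crayon(Pen): pass
class Chalk(Crayon): pass

def all_children(cls):
    """Recursively collect every subclass of cls."""
    children = []
    for sub in cls.__subclasses__():
        children.append(sub)
        children.extend(all_children(sub))
    return children

# Anything listed here inherits whatever you just bolted onto Pen.
print([c.__name__ for c in all_children(Pen)])  # ['Crayon', 'Chalk']
```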
Skull132 Posted May 12, 2019

There are at least three different points conflated within this post.

Feature completeness. À la, "Don't remove something unless you have something to replace it." Depending on the PR in question, this is a valid point: a PR should be feature complete, and usually this is enforced. However, note that this is a subjective metric, so we may well have different views on what requires a replacement. Some things don't, and removal alone is sufficient.

Follow-up. Or, well, bug fixes. This is probably the largest issue presented here. As we slowly accrue fresh contributors (and I do mean fresh), the amount of bugs generated by them is also going to accrue. Especially from small silly shit, wherein the bugs aren't actually bad or game-breaking, and thus aren't grovelled over by the playerbase at large. A proposal, one which might be well worth considering if this trend continues, is the addition of "gud coder points", wherein your ability to submit "Feature" PRs gets automatically limited until you submit "Bugfix" PRs. Another point of follow-up is indeed balance. I see largely two approaches to this: either maintainers begin more actively supervising balance-oriented PRs, at which point they will become sparser and more difficult to get through; or we, uh, don't do anything and pray for the best. There's no really good way to handle this that is not a mix of these two (larger balance changes get more oversight, and are thus slower), which is what we have now anyway.

Review and testing. Your list, effectively. Point one, mistakes happen, and perhaps the relief from point 2 would be able to assist with that. We do test-merge large PRs now, but obviously we can't apply this to every single PR forever. So ye, I do not have many proposals for this point.
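The "gud coder points" idea could be as simple as a merged-PR ratio check. A hypothetical sketch (the labels, the function, and the one-to-one ratio are all assumptions for illustration, not an existing policy):

```python
def may_open_feature_pr(merged_prs, bugfixes_per_feature=1):
    """merged_prs is a list of labels like 'feature' or 'bugfix'.

    A contributor may open a new Feature PR only once they have
    merged enough Bugfix PRs to cover their existing features.
    """
    features = merged_prs.count("feature")
    bugfixes = merged_prs.count("bugfix")
    return bugfixes >= features * bugfixes_per_feature

print(may_open_feature_pr(["feature", "bugfix"]))             # True
print(may_open_feature_pr(["feature", "feature", "bugfix"]))  # False
```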
Kaed Posted May 12, 2019 (edited)

The idea of 'gud coder points' is sort of interesting, but I feel like it would be kind of a turnoff to fresh contributors. I don't necessarily want to drive people away by forcing them to eat their brussels sprouts before they can get their dessert, but I would like to encourage a mindset of taking a few minutes to at least sift through the code, see where instances of the thing you changed show up, and consider whether you need to add stuff to those spots or revise them to avoid bugs. Like... making sure that your steak is fully cooked before you eat it. Food analogies.

I think the problem that is causing so many bugs is that new people have a very myopic viewpoint when submitting PRs. They know a thing they want to change and they focus specifically on changing that thing, without investigating what uses said thing and where. This 'checklist' sort of behavior would also help people understand the structure of the code a little better, even if they aren't trying to change it all at once.

Edited May 12, 2019 by Kaed
Skull132 Posted May 12, 2019

1 hour ago, Kaed said:
"I don't want to necessarily drive people away by forcing them to eat their brussels sprouts before they can get their dessert."

But, like, this is the main concern raised in the post. And there is no way to counter it beyond forcing (or incentivizing) people to fix bugs if they wish to do grander things.

1 hour ago, Kaed said:
"I think the problem that is causing so many bugs is that new people have a very myopic viewpoint on submitting PRs. They know a thing they want to change and they focus specifically on changing that thing, without investigating what and where said thing is used."

This happens to even professional coders. The butterfly effect is a metric ass, and the most concrete way to counter it is to install a battery of tests. We don't really have that. Instead, we have peer review, which brings us to the following point.

1 hour ago, Kaed said:
"This 'checklist' sort of behavior would also help people understand the structure of the code a little better even if they aren't trying to change it all at once."

How would it change the status quo? If the person cannot already intuitively understand that his code is about to invoke the butterfly effect at scale, then he will simply skip over these questions unless someone else verifies the answers. Even requiring that all features be "tested" doesn't work, because testing is actually a relatively complex skill: it is typical to look only for a positive result (e.g. "my feature works"), while neglecting negative and surrounding results. This is a methodology which can only be learned through trial and error, IMO, and it will, at the end of the day, still require a code reviewer to pose the exact same questions and walk through the exact same process.
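The point about positive versus negative results can be made concrete. A hypothetical Python regression test (the real codebase is DM; Pen and Crayon here are stand-ins): the first case checks that the new feature works, and the second checks that a sibling object was not changed by accident — the part inexperienced testers usually skip.

```python
import unittest

class Pen:
    def attack_self(self):
        return "click"   # the new feature

class Crayon(Pen):
    def attack_self(self):
        return None      # guarded override: crayons stay silent

class TestPenClick(unittest.TestCase):
    def test_pen_clicks(self):
        # positive result: the feature itself works
        self.assertEqual(Pen().attack_self(), "click")

    def test_crayon_does_not_click(self):
        # negative/surrounding result: children were not affected
        self.assertIsNone(Crayon().attack_self())

if __name__ == "__main__":
    unittest.main()
```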
Screemers Posted May 15, 2019

I might be wrong about what I'll be writing next, so don't hesitate to correct me if that's the case. I've looked at the way issues are reported, and it seems to be only through the GitHub page, where you need to create an account to give feedback. Wouldn't it be easier to consolidate all reports and feedback about updates in a single form that everyone could reach from a link in the client? That way it would be easier for the person coding, and for others, to follow up on the subject and see the related issues. Proceeding that way might also increase bug reports from players and increase feedback about poorly implemented changes when they happen. Like Skull132 said, code is not that easy to test, and a small change you think is benign may cause a butterfly effect producing bugs you would never have suspected. Better communication from the players and more bug reports could help us react to those things faster when they happen. My suggestion doesn't exactly address the point of Kaed's post, since it's not a direct incentive for coders to be more thorough, but a report system of this type could, over time, point out the coders with the most problematic reviews/most bugs and could help cut them out. Just to be clearer, the model I have in mind is a simple database that links coders with their updates, bugs, and feedback (and from whom), plus maybe some other information depending on the needs.
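The database model described above could be quite small. A hedged sketch in Python (all names and sample data are invented for illustration; a real system would likely sit behind the client link as a web form):

```python
from dataclasses import dataclass, field

@dataclass
class Feedback:
    author: str   # which player reported it
    text: str

@dataclass
class Update:
    coder: str
    title: str
    bugs: list = field(default_factory=list)       # linked issue IDs
    feedback: list = field(default_factory=list)   # Feedback entries

updates = [
    Update("example_coder", "pen clicking", bugs=[101, 102]),
    Update("example_coder", "spear rebalance", bugs=[103]),
]

# "point out the coders with the most bugs"
bug_counts = {}
for u in updates:
    bug_counts[u.coder] = bug_counts.get(u.coder, 0) + len(u.bugs)
print(bug_counts)  # {'example_coder': 3}
```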