
Some coding convention rules, Working Effectively with Legacy Code

 https://softwareengineering.stackexchange.com/questions/133404/what-is-the-ideal-length-of-a-method-for-you

Method length should be around 15 lines (a soft limit).

We can do “testing to detect change.” In traditional terms, this is called regression testing. We periodically run tests that check for known good behavior to find out whether our software still works the way that it did in the past.
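A minimal sketch of such a test in Python (the `compute_invoice_total` function here is an invented stand-in for a legacy routine): it pins down whatever the code currently does, so any later change in behavior shows up as a failure.

```python
import unittest


def compute_invoice_total(items):
    """Stand-in for a legacy routine; imagine this is tangled code we dare not touch."""
    total = 0.0
    for _name, quantity, unit_price in items:
        total += quantity * unit_price
    return round(total, 2)


class InvoiceCharacterizationTest(unittest.TestCase):
    """A test that detects change: it records what the code does today, right or wrong."""

    def test_total_matches_currently_observed_output(self):
        # The expected value was captured by running the existing code once and
        # recording its output, not by consulting a specification.
        items = [("widget", 2, 19.99), ("gadget", 1, 5.00)]
        self.assertAlmostEqual(compute_invoice_total(items), 44.98, places=2)


if __name__ == "__main__":
    unittest.main()
```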

Let’s do a little thought experiment. We are stepping into a large function that contains a large amount of complicated logic. We analyze, we think, we talk to people who know more about that piece of code than we do, and then we make a change. We want to make sure that the change hasn’t broken anything, but how can we do it? Luckily, we have a quality group that has a set of regression tests that it can run overnight. We call and ask them to schedule a run, and they say that, yes, they can run the tests overnight, but it is a good thing that we called early. Other groups usually try to schedule regression runs in the middle of the week, and if we’d waited any longer, there might not be a timeslot and a machine available for us. We breathe a sigh of relief and then go back to work. We have about five more changes to make like the last one. All of them are in equally complicated areas. And we’re not alone. We know that several other people are making changes, too.

The next morning, we get a phone call. Daiva over in testing tells us that tests AE1021 and AE1029 failed overnight. She’s not sure whether it was our changes, but she is calling us because she knows we’ll take care of it for her. We’ll debug and see if the failures were because of one of our changes or someone else’s.

Software Vise

vise (n.). A clamping device, usually consisting of two jaws closed or opened by a screw or lever, used in carpentry or metalworking to hold a piece in position. (The American Heritage Dictionary of the English Language, Fourth Edition)

When we have tests that detect change, it is like having a vise around our code. The behavior of the code is fixed in place. When we make changes, we can know that we are changing only one piece of behavior at a time. In short, we’re in control of our work.

Does this sound real? Unfortunately, it is very real. 



We get ready to make our change, but we realize that it is pretty hard to figure out how to change it. The code is unclear, and we’d really like to understand it better before making our change. The tests won’t catch everything, so we want to make the code very clear so that we can have more confidence in our change. Aside from that, we don’t want ourselves or anyone else to have to go through the work we are doing to try to understand it. What a waste of time!


Unit testing is one of the most important components in legacy code work. System-level regression tests are great, but small, localized tests are invaluable. They can give you feedback as you develop and allow you to refactor with much more safety.


Testing in isolation is an important part of the definition of a unit test, but why is it important? After all, many errors are possible when pieces of software are integrated.
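One common way to get that isolation is to put the slow or external dependency behind a narrow seam and pass a fake into the code under test. A small sketch, with invented `OrderImporter` and `FakeDatabase` names, just to show the shape:

```python
import unittest


class FakeDatabase:
    """Stands in for the real database so the test runs in milliseconds, in isolation."""

    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)


class OrderImporter:
    """The database is injected, so a test can pass a fake instead of a real connection."""

    def __init__(self, db):
        self.db = db

    def import_row(self, row):
        order = {"id": int(row["id"]), "amount": float(row["amount"])}
        self.db.save(order)
        return order


class OrderImporterTest(unittest.TestCase):
    def test_row_is_converted_and_saved(self):
        db = FakeDatabase()
        importer = OrderImporter(db)
        importer.import_row({"id": "7", "amount": "12.50"})
        self.assertEqual(db.saved, [{"id": 7, "amount": 12.5}])


if __name__ == "__main__":
    unittest.main()
```

The test never touches a real database, so it tells us quickly and precisely whether the conversion logic itself is broken, separately from any integration problems.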


Some notes on issues from a real project:

a. Team B (four time zones away) has poor coding practices and poor structure, and designs only the simplest logic. This has led to a bad code base: missing logic and missing steps (e.g., many SQL updates are run on the fly and are not documented or noted anywhere in the code base or elsewhere).
So, "when we start with legacy code, we have to test it." With enough knowledge of the system, we can. But in this case we do not know or fully understand the business logic (the system's behaviors), and the BA and the owner (14 time zones away) have not documented it properly.

This leads to missing system behaviors/logic. Common sense says these should be easy to capture (e.g., many basic import/export CRUD tasks), but the way the legacy code is structured and abnormally mixed together, plus the special cases (old systems have plenty of them), turns a 'simple task' into a complicated one.

There are two ways to proceed. One is based on the old code: run it, test it, compare the data, and if anything is missing or different, trace the cause and fix it. The other is to rewrite it, then run both versions, compare the results, and test and fix from there (a sketch of this comparison is below).
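Either way, the core of the work is diffing the old output against the new one. A rough sketch of such a "golden master" comparison for CSV exports (the file names and the assumption that both exports come from the same input batch are hypothetical):

```python
import csv


def load_rows(path):
    """Read a CSV export into a list of dictionaries keyed by column name."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def diff_exports(old_path, new_path):
    """Report row-level differences between the legacy export and the rewrite's export."""
    old_rows, new_rows = load_rows(old_path), load_rows(new_path)
    problems = []
    if len(old_rows) != len(new_rows):
        problems.append(f"row count differs: {len(old_rows)} vs {len(new_rows)}")
    for i, (old, new) in enumerate(zip(old_rows, new_rows)):
        if old != new:
            problems.append(f"row {i} differs: {old} vs {new}")
    return problems


if __name__ == "__main__":
    # Hypothetical file names; in practice both exports are produced from the same input.
    for problem in diff_exports("export_legacy.csv", "export_rewrite.csv"):
        print(problem)
```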

Both approaches still have issues: the INPUT changes (e.g., the CSV input may differ in columns, data types, float formatting, etc. from one delivery to the next), so the OUTPUT also changes a bit. Without a history of changes (Git, or some other kind of version control) we cannot be sure the current code base will handle future INPUT changes, new cases, or special cases; one way to at least notice such drift early is sketched below.
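A hedged sketch of that idea: validate each incoming CSV against the column set and types the code currently assumes, and fail loudly when they change. The `EXPECTED_SCHEMA` shown is invented for illustration, not taken from the real project.

```python
import csv

# Hypothetical schema: the columns and types the current code base assumes.
EXPECTED_SCHEMA = {"order_id": int, "customer": str, "amount": float}


def validate_csv(path):
    """Return a list of schema problems so input drift is caught before processing."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        columns = set(reader.fieldnames or [])
        missing = set(EXPECTED_SCHEMA) - columns
        extra = columns - set(EXPECTED_SCHEMA)
        if missing:
            problems.append(f"missing columns: {sorted(missing)}")
        if extra:
            problems.append(f"unexpected columns: {sorted(extra)}")
        # Data rows start on line 2, after the header row.
        for line_no, row in enumerate(reader, start=2):
            for name, cast in EXPECTED_SCHEMA.items():
                if name not in row or row[name] is None:
                    continue
                try:
                    cast(row[name])
                except ValueError:
                    problems.append(
                        f"line {line_no}: {name}={row[name]!r} is not {cast.__name__}"
                    )
    return problems
```

Running this check at the start of every import does not prove the logic is right, but it does turn a silent "the columns changed again" surprise into an explicit failure.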
