The last two weeks have been some of the most fun I've had working at Docscorp. The team was given the chance to redesign our comparison model into a better one. In its current state, the model is one big giant blob of procedural functions contained in… two classes. Yep, that's right: two classes. (Now you know how bad it is.) Our job was basically to transform these two classes into a model that is much more object oriented.
I spent some time skimming through all the functions in the two classes. I realized that the logic is already solid; it's just that the code is not as "organized" as it should be. In other words, redesigning the current model means one BIG refactoring job needs to be applied to the two classes.
I decided to take a stab at it and started refactoring the classes. After a few days, I came up with a good, clean model of how it should be done. While refactoring, I realized that all I was doing was applying a couple of programming principles to those two classes. These principles are:
- DRY (Don't Repeat Yourself)
- KISS (Keep It Simple)
- SRP (the Single Responsibility Principle)
Yep, those are the only principles that helped me transform/refactor two giant procedural classes into a modular, object-oriented model.
I think those should be the basis of your decision on when to refactor your code. If you find that you're repeating yourself or writing the same or similar logic, then it warrants refactoring your code. If you find that your logic or algorithm is getting too complicated, it's time to break it down to make it simpler (that doesn't mean you have to change your logic). Lastly, if you find your code is doing too much or has too much responsibility, it's time to refactor it out (I know that by definition SRP is a class thing, but I think the same logic can be applied to functions).
You can always take things further and apply more programming principles to your code. But these three principles are a must in order to achieve reasonably clean code. If you apply these three rules in your day-to-day code, I'm sure that over time you'll find your code will be a lot cleaner and much more maintainable. So don't be afraid to apply these rules to refactor your code.
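As a toy illustration of the point (the class and method names here are made up, not from our comparison model): extracting a copy-pasted check into one small, single-purpose method applies both DRY and SRP at the function level.

```csharp
using System;

// Hypothetical example: before, this null/whitespace check was copy-pasted
// at every call site. After, the duplicated logic lives in one place (DRY)
// and the method has exactly one job (SRP applied to a function).
public static class TextRules
{
    public static bool IsBlank(string value)
    {
        return string.IsNullOrEmpty(value) || value.Trim().Length == 0;
    }
}
```

Every call site then becomes a one-liner like `if (TextRules.IsBlank(name)) ...`, and a change to the rule happens in exactly one place.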
Saw someone tweet this link. It gives a good, simple explanation of the much-misunderstood relationship between HTML and XHTML.
Misunderstanding Markup: [Link]
Saw @MarkHNeedham tweet this link. It contains many good principles that will help you write better quality code. You may already know many of them, but it's good to have someone list and briefly summarize them.
Clean Code Cheat Sheet. [link]
Recently there was quite a bit of commotion about code quality in the programming community. The commotion was between two of the most influential people in that community, Robert C. Martin and Joel Spolsky. Here's a brief recap of what happened.
It all started with a podcast by Scott Hanselman talking about the SOLID principles with Robert C. Martin (aka Uncle Bob). The founders of stackoverflow.com, Joel Spolsky and Jeff Atwood, heard the podcast and made an ad-hominem comment in Stack Overflow podcast #38, stating that "Quality just does not matter that much". Hearing this statement from high-profile developers with years of experience prompted Robert Martin to respond. He wrote a blog post to counter Atwood's argument and express his concerns. Joel Spolsky and Jeff Atwood then decided to invite Robert Martin as a guest speaker on Stack Overflow podcast #41. Before the podcast, Robert Martin posted another blog entry, an open letter to Jeff and Joel, addressing all the issues he has with their comments.
After listening to all the podcasts and reading all the blog posts, it seems to me that there's actually a misunderstanding between the two parties. When Jeff said "Quality just does not matter that much", he did not mean it literally. What he meant was "Quality just does not matter that much when inexperienced people try to apply programming principles so religiously that it slows down their productivity and produces no results." Joel and Jeff did not say that clearly enough in the podcast, so it sounded like a rather offensive comment toward Robert Martin.
I have to say that I’m 100% agree with Jeff statement. Inexperienced developer that overwhelmed by theories tend to forget that in the end the most important thing is the result. I remember when I first being exposed with design principles, it felt awesome to have learnt and understood something technical. I was so excited that when it came to solving a problem, I became more focused on how to apply the principle rather than actually solving the problem. Focusing more on applying a programming principle is a big problem, because you’re wasting time on something that does not necessarily solve the problem. In the end you did not get anything done and your work shows no result. It is very important to fully understand what programming problem does a principle solve, before we start using it.
So when is it appropriate to follow programming principles? After fully understanding a principle, we should always try to follow it whenever possible. Code that conforms to programming principles will be a lot easier to work with. For example, which would you prefer: working on a class that is 20k lines long, or working with multiple classes of 2k lines each? I bet you'd prefer the multiple-classes scenario. Even when time is a constraint, we can always try our best to follow good design principles. I think of it as an early investment. It may take longer to code a solution at first, but it will make it a lot easier to work with in the future.
Having said that, there are times when it may not be appropriate to apply your programming principles. I find very often that I have to refactor my code when applying programming principles to an old code base. Refactoring is always a risky thing to do, no matter how good your test coverage is. Thus it is very important to assess how much risk is involved in applying the principle. After we identify the magnitude of the risk, we need to assess how important the build we need to patch is. If it is a critical build, you might want to save your principle to apply later. For example, if you are making a build for a deal worth $500k+, introducing risk by refactoring a big chunk of your code just to apply a programming principle would be simply stupid. Apply your programming principles wherever and whenever it makes sense.
In this blog post, I'm taking neither side. I agree completely with Uncle Bob regarding code quality. A good programmer must have the craftsman's passion to write the best code possible, all the time. On the other hand, Jeff's argument is also valid, because there are just times when it might not be appropriate to apply programming principles.
Resources on the Uncle Bob vs Joel Spolsky:
- Scott Hanselman's podcast with Uncle Bob talking about the SOLID principles [link]
- Stackoverflow.com podcast #38 [link]
- Uncle Bob open letter to Joel Spolsky & Jeff Atwood [link]
- Stackoverflow.com podcast #41 [link]
- Jeff Atwood’s “Ferengi Programmer” blog post [link]
- Dhananjay Neneto's reaction to Jeff's "Ferengi Programmer" post [link]
- InfoQ summary on the whole Uncle Bob vs Joel Spolsky [link]
On this page you will find a list of programming articles that I've written. I only compile the most interesting ones. This list will be updated monthly.
Uncle Bob keynoting at Oredev in Malmo, Sweden.
Keynote: The Renaissance of Craftsmanship
What does it mean to be a professional software developer? What rules do we follow? What attitudes do we hold? And how can we maintain our professionalism in the face of schedule pressure? In this talk Robert C. Martin outlines the practices used by software craftsmen to maintain their professional ethics. He resolves the dilemma of speed vs. quality, and mess vs schedule. He provides a set of principles and simple Dos and Don’ts for teams who want to be counted as professional craftsmen. [Original Link]
Further resources: Post-event interview by .NetRocks Team. [link]
In my previous blog post, I had a solution to prevent a subtle bug when removing and re-adding an event handler. The solution works, but it's not flexible at all. You can see the method is very dependent on, and attached to, the button1 object and the Click event, which means the logic cannot be reused for other objects or other events. For example, if I want the same logic for TextChanged, I would have to write another function for that event.
So how can we make it more modular? Well, we need to be able to register and unregister event handlers dynamically. This can easily be achieved using reflection and delegates. Reflection allows us to browse all the events that a type has, in the form of EventInfo objects. We can then use an EventInfo object to add or remove an event handler for the event. To do so, EventInfo requires a delegate of the event handler type that represents/points to an instance method to invoke on a class instance. The delegate can be created using the Delegate.CreateDelegate method, which takes the following parameters: the type of the delegate, an object that contains the event handler method, and the event handler method name. Here is the new code for registering the Click event.
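A minimal sketch of the approach described above (EventRegistrar is a made-up helper name; EventInfo, AddEventHandler, RemoveEventHandler, and Delegate.CreateDelegate are the real .NET APIs):

```csharp
using System;
using System.Reflection;

public static class EventRegistrar
{
    // Attaches subscriber.handlerName to target's event 'eventName',
    // removing any identical existing subscription first, so the call
    // is safe to repeat. Works for Click, TextChanged, or any other event.
    // Note: the string-based CreateDelegate overload binds by method name,
    // so the handler must be visible to the lookup.
    public static void Register(object target, string eventName,
                                object subscriber, string handlerName)
    {
        EventInfo ev = target.GetType().GetEvent(eventName);
        Delegate handler = Delegate.CreateDelegate(
            ev.EventHandlerType, subscriber, handlerName);
        ev.RemoveEventHandler(target, handler); // no-op if never added
        ev.AddEventHandler(target, handler);
    }
}
```

With this in place, `EventRegistrar.Register(button1, "Click", this, "button1_Click")` and `EventRegistrar.Register(textBox1, "TextChanged", this, "OnTextChanged")` both go through the same code path.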
Be very careful if you have code like the example above. It may seem OK: you unregister the event handler at the start of the function and re-register it at the end. No event gets lost, no drama! Now let's say that in that function we call another function, Foo. In Foo we add the same event handler logic, that is, unregistering and re-registering the handler.
The above code has a subtle bug: the event handler gets registered twice, once at the end of Foo and again at the end of the Form1_Load function. The impact is obvious: when the event is raised, the handler will be called twice. OK, it may not be very subtle in the code above, but in a more complicated function this bug is rather hard to track down. It happened to me recently in one of our product code bases. What I did to ensure this kind of problem never happens again was to add another function responsible for adding the handler to the event. This function is always the one used for adding the handler to the event. The code is as below.
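A self-contained sketch of that guard (Counter, Ticked, and OnTicked are stand-ins for button1, Click, and Form1's handler): the single add-function unsubscribes before subscribing, so calling it from both Form1_Load and Foo still leaves exactly one registration.

```csharp
using System;

class Counter
{
    public event EventHandler Ticked;

    public void Tick()
    {
        if (Ticked != null) Ticked(this, EventArgs.Empty);
    }
}

public class Program
{
    static readonly Counter counter = new Counter();
    public static int Calls;

    // The one and only place that subscribes. Removing the handler first
    // makes the call idempotent, so double registration can never happen.
    static void AddTickedHandler()
    {
        counter.Ticked -= OnTicked;
        counter.Ticked += OnTicked;
    }

    static void OnTicked(object sender, EventArgs e) { Calls++; }

    public static void Main()
    {
        AddTickedHandler();      // e.g. from Form1_Load
        AddTickedHandler();      // e.g. from Foo, later in the same load path
        counter.Tick();
        Console.WriteLine(Calls); // prints 1, not 2
    }
}
```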
UPDATE: A better and more flexible approach can be found here
For the last couple of days, I've been reading Joel Spolsky's blog quite regularly. Just a bit of an introduction to the guy: he's the CEO of Fog Creek Software and one of the team behind StackOverflow.com. His past experience includes being a Program Manager on the Microsoft Excel team, working at Viacom Interactive Services, and at the Juno Online ISP.
I enjoy his blog a lot: well-written, very concise articles, with examples that are easy to understand. He even uses real-life examples from the IT industry's past. For instance, in his article "Things You Should Never Do, Part 1", which argues that we should never start over from scratch, he gives an example of how Netscape lost its market share when it decided to ditch its code and start from scratch. Some of his examples also come from big companies like Microsoft, Sun, and IBM.
His articles cover best practices in software development, team management, software product management, and software company management. In one article, he describes 12 questions that test the quality of a software development team. I have to say I agree with most of them, except for the last one, which I think is unnecessary. He mentions the importance of functional specifications in these articles; I couldn't agree more. He also has business strategy articles for software development companies. Since it has always been my dream to run my own software company, I enjoy those a lot.
Moreover, he has articles that describe his philosophy in building his company. He has done a great job of creating a very desirable workplace, which he describes here. If you're too lazy to read the post, you can just see the pictures here. It amazes me how much he's willing to invest in it (considering that his company is not even classified as a mid-size company), and that he does it for the sake of his programmers. I honestly thought only Google did this.
His blog is by far the best blog I've ever read. Not one article I've read was badly written. Highly recommended; I'm adding him to my blogroll.
For the last couple of months my work has been bottlenecked by my build time. After I got my new computer, my build time in Visual Studio was pretty quick. It was a very enjoyable time to work. Then one day my build time blew out to 45 minutes. It even reached the one-hour mark at one point. This issue affected my teammates as well. We tried to investigate the cause, but never seemed able to solve the problem.
Luckily my senior came up with an interim solution, which was to use MSBuild through a NAnt script. The interim solution took 6-7 minutes on average to build, which is a lot better than 45 minutes. I told my senior that even though the alternative builds a lot faster, we still needed to solve the build time issue. He then said to me, "We can't afford to waste more time trying to solve this; we have important work to do. We have to settle for the alternative solution." I figured he might be right, and asked myself how bad a 6-minute build time could really be.
Well, it turns out it's pretty bad. Within the first few weeks I could feel the effect. Most of my time was wasted waiting for my solution to build; work became so boring. It was very hard for me to focus and get into "The Zone". Even when I got into the zone, my 6-minute build time would easily break it. Why? Because I was doing other things during the waiting time: reading blogs, listening to podcasts, watching PDC videos and whatnot. You may think I was slacking off, but I was just trying to utilize my time rather than waste it.
Let's just do the math for a sec. I have a 6-minute build time. Let's say I do 10 builds a day at minimum. Since one build takes 6 minutes, that means I waste an hour of my work time every day. And that's just the minimum; there's no way software developers do only 10 builds a day. They usually do more, a lot more. Especially in my case, where I have to use logging to debug my application. I can honestly say this issue had been taking a good 2-3 hours out of my work day.
This build issue had worn me out, physically and mentally. Especially when time was a constraint, I felt bad when I was not able to deliver a solution on time because of it. What made it worse was that my teammate didn't seem to have any problem with it. Wait, did I say my teammate didn't seem to have any problem with the build issue? Yes: he doesn't, and never did. It turns out he has a hacky way of tackling the problem that makes his build time uber fast. It involves changing the project references of the project we want to build to reference just the DLLs. So I tried his way: I grabbed his project file and chucked it into my solution. BAM, my build time went down to less than 30 seconds!! Man, to be honest I couldn't have been happier. All of a sudden my work became less dull and more exciting. I feel I've gained my speed back. It's awesome!!
Based on my experience, all I can say is: do not underestimate your build time. A long build time can seriously hurt your performance and demoralize you in a big way. I would consider any build that takes longer than 2 minutes a problem. If you have a build problem, fix it right away; it's not worth your time and energy to work with a long build time. Faster build time = better work efficiency.
In some unknown cases, opening files/documents using the .NET Process class may have a performance impact. I hit this issue a while back: it took a good few seconds for the .NET process to open up a Word document. I had code similar to the below when the issue occurred.
After some research, I found that this can be remedied as follows: instead of setting ProcessStartInfo.FileName to the file path, you set the property to the path of the designated application for opening the file, and specify the file you would like to open as its arguments (as shown below).
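A sketch of that workaround (DocOpener is a made-up helper, and the WINWORD.EXE path is an assumption; a real implementation would look the associated application up rather than hardcode it):

```csharp
using System.Diagnostics;

static class DocOpener
{
    // Start the application directly and pass the document as an argument,
    // instead of letting Process resolve the file association itself.
    public static ProcessStartInfo WordStartInfo(string documentPath)
    {
        return new ProcessStartInfo(
            @"C:\Program Files\Microsoft Office\Office12\WINWORD.EXE",
            "\"" + documentPath + "\"");
    }
}

// Usage: Process.Start(DocOpener.WordStartInfo(@"C:\Docs\report.doc"));
```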
The problem with this method is that you need to do it for every file type you want to support. So if your application supports multiple file formats, you need to handle each and every one of them in your code. Another problem is that you need a smart way of detecting the default application the user has set for opening a file. For example, if you want to open a Word document, you can't always assume the user has Microsoft Word installed on his machine. What happens if the user has OpenOffice instead? Or what if the user has both MS Office and OpenOffice installed, but has opted to use OpenOffice as the default application? Surely the user would prefer OpenOffice over MS Office. Thus we cannot just hardcode values.
Another solution to the problem is to use the ShellExecute method of shell32.dll. One way to do this is by defining an external method for ShellExecute and then calling it from your code. Another way is to have the Process object do a shell execute. You can do this by setting the UseShellExecute property of your Process's StartInfo object to true.
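A sketch of the UseShellExecute variant (ShellOpener is a made-up helper): the Windows shell resolves whatever default application the user has chosen, be it Word or OpenOffice, so nothing is hardcoded.

```csharp
using System.Diagnostics;

static class ShellOpener
{
    public static ProcessStartInfo ShellStartInfo(string documentPath)
    {
        var info = new ProcessStartInfo(documentPath);
        info.UseShellExecute = true; // delegate app lookup to the Windows shell
        return info;
    }
}

// Usage: Process.Start(ShellOpener.ShellStartInfo(@"C:\Docs\report.doc"));
```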
So if you ever hit a performance issue when opening documents using the .NET Process class, give the above solutions a try.
By my definition, a namespace is a container/placeholder that defines a logical unit in a software system. At the most basic level, I follow this namespace naming convention: CompanyName | ProductName | [TechnologyName] | SoftwareLayer. The technology name is optional, thus I'm putting it inside square brackets. The software layer part is what usually defines the kinds of logic/classes that go into the namespace. For example:
- RWendi.Foo.BLL: this is where I put the business logic layer of the Foo application
- RWendi.Foo.Blob.DAL: this is where I put the data access layer of the Blob component of the Foo application
- RWendi.Foo.Utility: this is where I put all the utility classes of the Foo application. Utility is not necessarily a software layer, but it is a self-contained logical piece of the software, and thus warrants a namespace of its own.
The fact that we have the software layer at the end of the namespace makes it easier to decide what logic needs to go into it. But what about the main namespace? What kind of logic would you put inside the RWendi.Foo namespace?
I usually use the main namespace as the place for all the things that relate to the application in general but not to any particular software layer: things like initialization logic, entry-point logic, application-wide events, and enumerations. Lately I had a side project to develop an API, and one of the things I realised is that you cannot apply the same rule to APIs. An API doesn't necessarily have an entry point, initialization logic, or even an event. I had trouble figuring out how to utilize the main namespace of the API: what should I put inside it?
I then realised that the API's main namespace should really contain the API logic itself. If I think about it, "an API IS a business logic layer". If an API's purpose is to provide an interface to another program, then the main API namespace should contain the classes necessary to make a request to that program. If an API is responsible for providing some business logic, then the API's main namespace should contain those business logic classes. Being accustomed to separating business logic classes into another namespace really blinded me this time, because it really doesn't make sense for an API to delegate its business logic classes to another namespace.
One of the things that I've been trying to do lately to improve myself as a software developer is to apply TDD (Test Driven Development) to my day to day development work.
A little introduction to TDD: TDD is a software development methodology where test cases are written first to accommodate the software requirements/specs, and code is written afterwards to fulfill the tests. The whole idea is to let your tests dictate or drive your software development, i.e. "your code is born out of your tests". Code written using TDD will generally be much simpler, because it is written just to pass your test cases. One of the problems that can arise when you write your code first is that you tend to get carried away and over-engineer your solution. Over-engineering your solution can be a bad thing, because it means you over-complicate things, implementing things that are not actually necessary to fulfill your requirements.
The basic iteration of TDD is as follows:
- Write your test case
- Run your tests (the new one should fail)
- Write your code
- Run your tests (every test must pass)
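The cycle above can be sketched with NUnit (Calculator is a made-up class for illustration; the [TestFixture], [Test], and Assert.AreEqual pieces are the framework's real API):

```csharp
using NUnit.Framework;

// Steps 1-2: write the test first and watch it fail.
// (Initially it won't even compile, since Calculator doesn't exist yet.)
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        Assert.AreEqual(5, new Calculator().Add(2, 3));
    }
}

// Steps 3-4: write just enough code to make every test pass, and no more.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}
```

The point is the order: the test defines what "done" means before a single line of production code exists.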
What I'm hoping to achieve by adopting TDD is to overcome my habit of overthinking things (as I mentioned here). TDD should help me focus on what's really important, which is to implement a solution that targets the required specification, nothing more and nothing less. Just as Rob Connery said in his MVC Storefront series, "TDD helps you to always challenge your assumptions. Don't ever assume the code that you're writing is going to be needed unless you test it first."
My first experience with TDD was a mix of good and bad. One of the first things I felt when doing TDD is that it's weird to write the test case first. It's really hard to adopt that mentality, as I am very much used to just coding away and worrying about testing later. I have to admit it's not as easy as it sounds, and because of this I was not able to be as productive as I could be. Moreover, the fact that my company doesn't use any unit testing framework at all didn't help one bit. In fact I had trouble using the VS2008 unit testing tool: it gave me a compile error due to a project reference located on a network path. Since I couldn't be bothered fixing it, I tried NUnit, and it worked like a charm.
Having said all that, after a few TDD iterations I'm starting to feel the benefit. Everything that TDD tries to achieve is starting to make sense: from writing your test based on your requirements, to writing your code to pass the test, to producing another test (one test leads to another). It feels like it gives a rhythm to the way you write your solution. Unfortunately, due to time constraints, I had to cheat my way through to finish my solution. I guess I still need some time to get used to it before I can really be productive with TDD. Anyway, I think the whole experience has been very positive, and I will definitely keep trying to apply TDD in my development work.
These past few days I made a couple of shocking discoveries in some of our libraries. I'm not sure if there's an official term for them (maybe code smells?), but basically these are my discoveries:
- A view class is decorated with methods that belong in a business logic class. Some of these methods provide validation logic, while others perform business logic inside the view class.
- A business object class has methods that take a view object as one of their parameters.
To me these are just not right at all. In the first example, the view class should not be doing any complex processing related to business objects. If a view class contains business logic, that means I have to use the view class to reuse that business logic. Classes that belong in the view layer are solely responsible for displaying or updating the view based on a given model. Any processing logic that isn't about displaying or updating the view, let alone doing things to the business object, should be delegated to the business logic layer. The only exception is when the interaction between the view and the business object is simple enough: simple operations like setting properties on the business object that require no business logic at all.
In the second example, the business object class should know nothing about the view class. It doesn't make sense to pass your GUI to your business object; what the hell is it going to do with a GUI anyway? A business object should not directly change a view's state in any way. I'm a firm believer that classes in the business object layer should be as light as possible. The business object layer's sole purpose is to represent your model in memory and perform CRUD operations against the data layer. If your business object is doing logic other than CRUD operations, that logic should be refactored out into the business logic layer.
Speaking of software layers, here is the basic view of the software layer stack as I understand it:
Ideally, the responsibility of each layer is as follows:
- A View class is responsible for initializing, updating, and displaying the GUI to the user. Any communication between a view and a model should go through a controller class. As stated above, the only exception is when the interaction between a view and a business object is simple enough (requires no business logic).
- A Controller class is responsible for controlling actions. It decides what action to take when there is a change in the model or when the user interacts with the view. That may mean updating a view or executing business logic.
- Business Logic is where you put all your domain's or model's logic.
- A Business Object represents your domain model in memory as an object. It may communicate directly with the data source, or it may utilize an entity class to abstract the data source away.
- An Entity is an object representation of your data source. Why not use the business object instead? Because then you would essentially be coupling your business object to your data, which means that if the data source changes, you're screwed. Moreover, you don't want to dirty your business object class with all that data mapping logic.
- Data is your data source.
These layers should be implemented as loosely coupled as possible to achieve a better design, with greater reusability and flexibility. By doing what the examples above do, you are basically coupling some of these layers together, because each class is doing more than it should. Any class should have only one responsibility. If a class has more than one, you should consider refactoring it.
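A minimal sketch of those boundaries (Order, OrderLogic, IOrderView, and OrderController are all invented names): the view stays dumb, the controller mediates, and the business rules never see the GUI.

```csharp
public class Order
{
    public decimal Total;
    public void Save() { /* CRUD against the entity/data layer */ }
}

public class OrderLogic
{
    // Business rules live here, with no knowledge of any view class.
    public bool IsValid(Order order) { return order.Total >= 0; }
}

public interface IOrderView
{
    void ShowError(string message);
    void Display(Order order);
}

public class OrderController
{
    private readonly OrderLogic logic = new OrderLogic();

    // The controller decides what happens when the user hits Save:
    // run the business logic, persist, then tell the view what to show.
    public void SaveClicked(IOrderView view, Order order)
    {
        if (!logic.IsValid(order))
        {
            view.ShowError("Invalid order.");
            return;
        }
        order.Save();
        view.Display(order);
    }
}
```

Note that OrderLogic could be reused from a web front end or a batch job, precisely because it never touches IOrderView.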
I have to say that it was a great event; I had a blast. The atmosphere was great, the food was great (pizza, beer, and soft drinks), people were keen and enthusiastic, and it was a great place to meet.
There was a bit of a technical glitch at the start of the event that left many of the attendees stranded on the ground floor for a few minutes, because level 8 is not accessible after 18:00. I was actually in the group that went up and down in the lift two or three times, hoping someone had the swipe card for level 8. The other people on the ground floor got mad, because the only lift that worked was full of us. It was a fun experience. Anyway, we started a bit late. Richard shared a couple of news items about what has been going on in the development world, and after that we had our meal, followed by the presentations.
The first presentation, about Ruby on Rails, was given by James Crisp. The presentation was top notch; he delivered it perfectly. It was brief, systematic, and very informative, and I liked it a lot. I had never tried Ruby before, nor even seen it, but after the presentation I want to try it. It looks like a very fun language to learn; I'm definitely putting it on my TODO list.
The second presentation, about Rhino Mocks, was given by Richard Banks. His presentation was more about giving ideas of what you can do with mocking. He didn't give any background on mocking, the reasons the concept exists in TDD, or why we should even bother using it in our testing. In my opinion it would have been nice to have a little introduction to mocking, just to give an idea to those who, like me, know little about it. Having said that, Richard delivered a great presentation as well. After seeing examples of mocking, I asked myself, "Dang, why didn't I know about this before?" Mocking is such a great way to isolate your tests. Who cares about the other classes; I just want to test this bit and mock the others. Easy, problem solved!! Funnily enough, Richard Banks' attitude towards software development reminds me a lot of my senior architect, Paul Doessel.
Another great thing about the event was that not only were the presentations very useful, but the interaction and atmosphere in the meeting room were very positive. People were collaborating; the presenters were not the only ones giving answers, as there were many others who showed as much expertise as the presenters. Without a doubt, lots of people with great minds attended the meeting.
Overall, I think the event was a big success. I'm definitely looking forward to attending next month's meeting, especially since they're going to cover my topic, ORM (NHibernate in particular).
The Sydney ALT .NET group will have their first meeting tomorrow, 30 September 2008. Basically, if you're the kind of .NET developer who's eager to look beyond the mainstream tooling, you're most welcome in the group. You can find the official definition of the group here. I can't wait to attend the meeting, as I'm sure I will learn a lot and meet people with great minds. I'll definitely make follow-up posts about the event.
ALT .NET Meeting (website)
Level 8, 51 Pitt Street
Sydney NSW 2000 Australia
Tuesday, September 30th
6pm - 8pm
Agenda For The Night:
6:00 pm - Meet & Greet time and then Kick Off!
6:30 pm - "Ruby, Rails and IronRuby from a .NET perspective". Presented by James Crisp
7:00 pm - Munchies & Drinkie
7:30 pm - "Mocking with Rhino Mocks 3.5". Presented by Richard Banks.
8:00 pm - Wrap up & go home.
"Every computing problem can be solved by adding another layer of interaction" - Scott Hanselman.
I just had to post this... Such an awesome quote... :)
Stored procedures are known to have performance advantages over inline SQL statements, such as:
- Pre-parsed/pre-compiled SQL statements, also cached for subsequent use.
- A pre-generated query execution plan. A query execution plan is the set of steps the database uses to execute SQL statements.
- Reduced network traffic: SQL statements can be executed in batches.
- Improved security: a user can be given permission to execute a stored procedure without having any permissions on the underlying data/tables.
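For comparison, here's roughly what calling a stored procedure from ADO.NET looks like (the connection string, the GetOrdersByCustomer procedure, and the OrderId column are all made-up names):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class OrderQueries
{
    // Builds the command. CommandType.StoredProcedure is what tells the
    // server to run the pre-compiled procedure with its cached plan,
    // rather than parsing inline SQL text.
    public static SqlCommand BuildOrdersCommand(SqlConnection conn, int customerId)
    {
        var cmd = new SqlCommand("GetOrdersByCustomer", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@CustomerId", customerId);
        return cmd;
    }

    public static void PrintOrders(string connectionString, int customerId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = BuildOrdersCommand(conn, customerId))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["OrderId"]);
            }
        }
    }
}
```

The inline-SQL version differs only in that the command text would be the full SELECT statement and CommandType would stay at its Text default.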
As we all know, DBAs love stored procedures and prefer them over inline SQL statements, but do we really gain much from using them? Developers may not be at maximum productivity when developing applications driven by stored procedures. This is because it's just not that easy working with stored procedures: they're painfully hard to debug with those non-descriptive error messages, and the fact that they tend to hide business logic makes applications less consistent.
Having said that, placing your business logic in the database is not entirely a bad idea, especially when your database is used by more than one application. Having a centralized place for your business logic means there is only one code base to maintain and manage. Changes in business logic need to be applied only to the stored procedure, not to every application that talks to the database. This of course does not apply to a database used by only one application.
There are also arguments that, with modern databases, the performance advantages of stored procedures may be negligible. This is right, but not always. Yes, they may be negligible, but only for simple queries. For larger and more complicated queries, the performance difference can add up substantially (time to parse/compile SQL statements and to generate the execution plan). A complicated query may take as long as a few seconds to build its execution plan; when a stored procedure is used, the plan is cached for subsequent use.
In conclusion, using stored procedures is definitely a good practice, especially with those performance benefits. But for simple queries, inline SQL statements may be sufficient, as those performance benefits will be negligible on a modern database. Who knows, years from now, maybe stored procedures will be completely redundant. Moore's Law lives on!!
As a software engineer, I keep indulging my bad habit of "thinking too much"!! Yes, too much thinking can do you no good. Too much thinking may lead you to forget the most important thing, which is to actually do something.
My initial step in doing something is to think and plan how to go about doing it. Once I've got a potential solution, it gets more complicated from there. These are the questions that usually come to my mind:
- Is this the right/best solution?
- Is the solution general enough?
- Is there anything that might break the solution?
- Is there anything the solution might break?
- Is there any better solution?
This is my usual, standard train of thought. On more complicated issues, more questions arise in my head. One leads to another, and another leads to others. It just keeps going in an infinite loop.
It's not necessarily a bad thing to have this kind of habit. After all, by thinking you get a bit more assurance that you're doing it the right way, not to mention you will be better prepared to anticipate any issue that might arise around your solution. The thing is, you just won't know the exact answers to these questions without trying.
In short, by thinking you may gauge, but the real answers appear only when you start implementing your solution. Thus, every time my infinite train of thought starts to run, I put in a big breakpoint that says "Just Do It!!".