Since this is a subject I'm rather passionate about, I crafted quite a long response. I feel like it deserves a stand-alone blog entry, if only for my own reference.
The original blog is here; I suggest you read it, but entirely understand if it's too long for your taste. My response, hopefully, quotes just enough of the source that you can understand what I am responding to without needing to read the source in its entirety.


About 65 years after the fact it is still debated whether or not the atomic bombs were moral
Here's the interesting part: Does the cost of our continuing to question its morality factor into the equation?

Furthermore, how do we account for ripple effects, many of which are indeterminate? For example, say the US never drops the bomb on Hiroshima. Does this increase the chances that at a later time, someone drops a bigger atomic bomb on a greater number of people? Is this a factor in our utilitarian analysis?




But now, let's go a different direction with this. Let us take a probabilistic scenario. There are 5 prisoners. One man is given a choice: either 4 of them (chosen at random) are killed, or a coin is flipped. If the coin is tails, all 5 are spared; if it is heads, all 5 are killed. If the man chooses the coin flip, does the morality of his decision depend on the result of the flip?
(We assume there is no viable third option, such as escape.)

If it does not, then we reach an important conclusion: the morality of an action cannot be judged solely by its result. This might lead us to the following perspective:
The Probabilistic Perspective: The morally correct choice is not the one that gives the best outcome, but the one that gives the best expected outcome. The moral decision is the one with the highest expected value, i.e. the one where the sum, over all possible outcomes, of (probability of outcome) * (value of outcome) is greatest.
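To make the arithmetic concrete, here's a quick sketch of that calculation in Python. The one assumption I'm making is that we value an outcome simply by the number of prisoners who survive; any other consistent valuation would just rescale the numbers.

```python
# Expected-value comparison for the two choices in the prisoner scenario.
# Assumption: the value of an outcome is the number of survivors.

def expected_value(outcomes):
    """Each outcome is a (probability, value) pair; the expected value
    is the sum of probability * value over all outcomes."""
    return sum(p * v for p, v in outcomes)

# Choice 1: 4 of the 5 prisoners are killed with certainty -> 1 survivor.
certain_kill = [(1.0, 1)]

# Choice 2: coin flip -> tails (p=0.5): all 5 survive; heads (p=0.5): none do.
coin_flip = [(0.5, 5), (0.5, 0)]

print(expected_value(certain_kill))  # 1.0 expected survivors
print(expected_value(coin_flip))     # 2.5 expected survivors
```

Under this measure the coin flip is the morally correct choice (2.5 expected survivors versus 1), and it is correct regardless of how the flip actually lands, which is exactly the point: this perspective judges the decision, not the result.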

Now we take the same scenario, except that the existence of the prisoners is not revealed to the man; he is simply asked to decide whether or not to flip a coin, knowing nothing about what the decision means. Obviously, there is no superior moral choice for the man; he cannot have any idea of the effect of his decision.
This gives us the Limited Information Perspective: We must consider the information available to the subject when the decision is made. He cannot be held accountable for that which is unknowable to him when he makes the decision.


However, despite the fact that socialism is supposed to benefit everyone, their [China's, etc] standard of living and GDP per capita is lower than in America and other European nations, that have a more capitalistic economy.
Are you really going to judge that single decision in a vacuum, as if there were no other factors causing this to be the case? Let's say I rule a small country with no natural resources, no skilled workers, and almost no income at all. There may well be no decision I can make that gives me a GDP comparable to the world leaders'. Does that mean whatever decision I make is a bad one?
You can only compare two outcomes directly if the decisions are made from identical starting conditions and there are no probabilistic determinants (that is, once a decision is made, all of its consequences are fixed, even if they are not immediately known).

not all utilitarians will come to the same conclusion about the morality of a choice
---
Neilsen offers a solution to this dilemma. He states that in such cases in which a person knows it is too difficult to judge which decision will produce the most happiness, to stick with our intuition, as deontologists tell us to do.
I like to go a different direction here.
We are unable to comprehend all possible effects of our decisions; therefore, our answers to even the most basic real-world utilitarian problems could be wrong (in the real world, each decision can have far-reaching, cascading effects, whether foreseeable or not, many of which will appear entirely unrelated). Therefore, the correct decision in any situation is any decision made with good intentions!
This does, however, lead us to the problem of defining the term "made with good intentions". How do we qualify good intentions? Is a decision morally right if it simply is not made with evil intentions? Or are all decisions not explicitly made for well-intentioned reasons immoral? (Likely, it's somewhere in the middle.) We've simply converted the measurement problem of utilitarianism ("How do we measure the value of a decision?") into a boundary problem ("Where do we draw the line between good and bad intentions?").

How do we deal with boundary problems like this? How do we reconcile the idea that two people applying the same measurement system can come to completely different conclusions about a decision? The obvious answer is moral relativism--but of course, it's not quite that simple. What if absolute right and wrong do exist, but are impossible for us to know? As none of us are able to determine morality objectively, we cannot operate on a system of objective morality. Therefore, it is perhaps most practical to believe in absolute morality, but think as if you believed in moral relativism.