Hi, Bill -- what I'd recommend paying more attention to is the method by which the margins of error were estimated. Bottom line, I'd recommend using the replicate weights in stats packages to do your significance tests. It'll be more accurate, and it saves you some manual work comparing standard errors yourself. :)
Longer version:
[Disclaimer: I'm not a statistician, so I'd appreciate whatever clarifications/corrections others can offer.]
If the MOEs are the ones that come out of the usual stats programs (assuming a simple random sample), then the MOEs above are probably too small, because the ACS uses a complex sample design. With PUMS data, it's best to use the replicate weights, which account for that complex sample design (by drawing 80 subsamples from the full PUMS, calculating statistics for each, and looking at the variation in the statistic across those 80 "sample replicates"). For more info, see the IPUMS-USA page on replicate weights or the "Approximating Standard Errors with Replicate Weights" section in the Census Bureau's PUMS accuracy document.
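To make the replicate-weight method concrete, here's a minimal Python sketch of the successive difference replication formula given in the Census Bureau's PUMS accuracy document: SE = sqrt((4/80) * sum of (replicate estimate - full-sample estimate)^2). The poverty rate and the 80 replicate values below are made up purely for illustration:

```python
import numpy as np

def replicate_se(full_estimate, replicate_estimates):
    """Successive-difference-replication standard error, per the
    Census Bureau's PUMS accuracy document:
        SE = sqrt( (4/R) * sum_r (theta_r - theta)^2 ),  R = 80 for PUMS.
    """
    reps = np.asarray(replicate_estimates, dtype=float)
    return np.sqrt(4.0 / len(reps) * np.sum((reps - full_estimate) ** 2))

# Toy example: a poverty rate of 0.150 from the full weights, plus 80
# invented replicate estimates scattered around it.
rng = np.random.default_rng(0)
reps = 0.150 + rng.normal(0, 0.005, size=80)

se = replicate_se(0.150, reps)
moe_90 = 1.645 * se  # published ACS MOEs use a 90% confidence level
```

In practice the stats packages (and the replicate-weight routines in R's survey package, Stata's svy, etc.) do this for you -- which is the "saves you manual work" part above.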
If the MOEs are replicate-based, then the formula above will probably work pretty well, with a caveat.
The actual formula for the standard error of the difference in means is the square root of the sum of the variances minus twice the covariance: (SE1^2 + SE2^2 - 2*Cov(1,2))^0.5
The formula we use with the summary files to approximate the standard error of the difference in means -- (SE1^2 + SE2^2)^0.5 -- ignores the covariance between the poverty rates in the different groups.* Since the covariance can be positive or negative, the approximation may overestimate or underestimate the actual standard error (as the Census Bureau's instructions for statistical testing in the summary files point out).
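A tiny numeric illustration of the two formulas (the SEs and the covariance here are invented, not from any actual table):

```python
import math

# Hypothetical standard errors for the poverty rates of two groups,
# and a hypothetical (positive) covariance between the two estimates.
se1, se2 = 0.004, 0.006
cov = 0.000008

se_approx = math.sqrt(se1**2 + se2**2)            # summary-file approximation
se_actual = math.sqrt(se1**2 + se2**2 - 2 * cov)  # with the covariance term

# With a positive covariance the approximation overestimates the true SE;
# flip the sign of cov and it underestimates instead.
```

So whether the approximation is conservative or anti-conservative depends entirely on the sign of that covariance term.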
* - What does this covariance term mean in practice? I don't know -- perhaps how the poverty rates in each of the 80 "sample replicates" covary? Does anyone else have an explanation?
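If that guess is right, the covariance would be estimated the same way the replicate variance is, just pairing up the two sets of replicate deviations. This is my reading of how the successive-difference estimator would generalize, not an official Census formula, so treat it as a sketch:

```python
import numpy as np

def replicate_cov(x_full, x_reps, y_full, y_reps):
    """Replicate-based covariance between two estimates, by analogy with
    the SDR variance formula:
        Cov(X, Y) = (4/R) * sum_r (x_r - x)(y_r - y)
    Setting Y = X recovers the replicate variance, i.e. SE(X)^2.
    """
    x_reps = np.asarray(x_reps, dtype=float)
    y_reps = np.asarray(y_reps, dtype=float)
    return 4.0 / len(x_reps) * np.sum((x_reps - x_full) * (y_reps - y_full))
```

Intuitively: if the two groups' poverty rates tend to move up and down together across the 80 sample replicates, the covariance is positive and the summary-file approximation overstates the SE of their difference.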