Larry Garfield has an interesting set of benchmarks covering many of PHP’s magic methods. His results correspond pretty well to my own benchmarks in the area. The thing to take away is that it’s not necessarily the overhead of the magic methods themselves, but rather what you do inside them. It’s hard to do anything useful inside a magic method such as __get or __call that isn’t 10 to 20 times slower than the “non-magic” solution.
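As a sketch of the kind of comparison these benchmarks make (the class names here are my own, purely illustrative):

```php
<?php
// Illustrative only: a plain property versus the same value behind __get.
class Plain
{
    public $value = 42; // direct slot access, no dispatch
}

class Magic
{
    private $data = ['value' => 42];

    // Every access to an undeclared property routes through this method.
    // The method-call overhead plus whatever work happens inside it
    // (here, an array lookup) is what makes "magic" access several
    // times slower than a plain property read.
    public function __get($name)
    {
        return $this->data[$name];
    }
}

$plain = new Plain();
$magic = new Magic();

$plain->value; // reads the declared property directly
$magic->value; // dispatches to __get('value')
```

Both reads produce the same value; the difference is entirely in the dispatch path, which is exactly where the 10–20x factor comes from.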
There are probably more than a few naive programmers who would read results like this and start avoiding these constructs in their own code for performance reasons. These are the same kinds of people who obsess over single versus double quotes, and who eschew “slow” objects in favor of switch statements that are many times slower than the polymorphic method calls they are trying to avoid.
But that’s not the end of the story. Larry ran his benchmarks using 2,000,000 iterations. The N really matters here. Sure, iterators are slower than arrays, but you aren’t going to be iterating over two million things. I tend to fetch my database records in lots of 25 or 50. You aren’t going to be making two million invocations of __call. But how many will you make? Under what value of N does the performance of these techniques cease to matter? Is it ten, one hundred, one thousand, or ten thousand? You may be surprised at how few calls your program actually makes and how little impact they have on performance.
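The point is easy to check for yourself. A minimal timing harness (my own sketch, not Larry’s code) lets you run the same operation at your program’s actual N rather than at two million:

```php
<?php
// Rough micro-benchmark harness; absolute times vary by machine
// and PHP version, so only the ratio at *your* N is interesting.
function time_calls(callable $fn, int $n): float
{
    $start = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        $fn();
    }
    return microtime(true) - $start;
}

// At N = 50 (a typical page of database records), even a call that
// carries a few microseconds of "magic" overhead costs a fraction
// of a millisecond in total -- invisible next to a single query.
$elapsed = time_calls(function () {
    // stand-in for a __get or __call dispatch
}, 50);
```

Run it at 50 and at 2,000,000 and you will see why the headline multipliers from a two-million-iteration benchmark rarely translate into anything you can measure in a real request.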
Paul M. Jones has a good example of what I’m talking about. There, call_user_func_array appears to be a bottleneck, but it turns out that it’s the function being called, htmlspecialchars, not the calling process, that consumes the balance of the time. In that case, the function was “only” called 300 times. I find that order of magnitude to be fairly typical. Something to be aware of, perhaps, but not something to obsess over.
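To make the distinction concrete, here is a small sketch (my own, not Paul’s benchmark) separating the indirect dispatch from the work done inside the called function:

```php
<?php
$input = '<b>bold</b>';

// Indirect dispatch: this is the line a profiler attributes to
// call_user_func_array, which makes it *look* like the bottleneck.
$indirect = call_user_func_array('htmlspecialchars', [$input, ENT_QUOTES]);

// Direct call: same work, no dispatch layer. Timing both reveals
// that htmlspecialchars itself, not the calling mechanism,
// accounts for nearly all of the elapsed time.
$direct = htmlspecialchars($input, ENT_QUOTES);

// The results are identical; only the dispatch path differs.
```

At 300 calls, the per-call dispatch overhead adds up to so little that optimizing it away would be invisible in any real profile.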