One of the things I spend most of my time on as an applied mathematician is implementation. In other words, I program computers, ranging from simple macros in Excel to scripting in MATLAB to compiled languages such as C (or C++) and FORTRAN to, my favorite, parallel implementations in GRID environments. The most challenging part is always the debugging phase. That’s where, after I’ve written the code, I figure out why it isn’t working. Interestingly, over Christmas I learned the history of the word *debug*. There is evidence that the old story of a moth becoming trapped in the US Navy’s Harvard Mark II computer is NOT the origin of the term computer “bug”. See here: http://www.worldwidewords.org/qa/qa-bug1.htm

At any rate, I’ve had two debugging problems today. One is solved; the other is a thorn in the flesh. (For some reason that is the third time today I’ve used that phrase.)

1. Maple integration problem: A simple problem was brought to me by a chemist in our Division: [tex]\displaystyle \int_0^{\alpha_1} \frac{1}{(x-2\alpha)(y-3\alpha)} \; d\alpha[/tex] He was not interested in simply solving it by partial fractions, but wanted his students to integrate it quickly in Maple. Of course, the command would be `int(1/(x-2*alpha)/(y-3*alpha),alpha=0..alpha_1);` However, the fact that nothing is known about the constants [tex]x[/tex] and [tex]y[/tex] and their relationship to [tex]\alpha_1[/tex] gives Maple a headache. There are commands in Maple to assume relationships between variables. We could simply use `assume(x-2*alpha>0)` and `assume(y-3*alpha>0)`, but we also need x and y to be positive. We tried many things and still got errors. Finally, I found the mistake: if you call the assume command more than once for the same variable, the later call erases the earlier assumption. You must call assume once with the whole list of assumptions about that variable, i.e., `assume(x-2*alpha>0,x>0)`. Problem solved, woohoo!
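For anyone without Maple handy, the same integral can be checked in Python with SymPy; this is just a sketch of the underlying math (the symbol names mirror the problem, and the positivity assumptions mirror the Maple ones), not the chemist’s worksheet:

```python
import sympy as sp

# Mirror the Maple assumptions: x, y, and alpha positive.
x, y = sp.symbols('x y', positive=True)
alpha = sp.symbols('alpha', positive=True)

# The integrand from the problem.
f = 1 / ((x - 2*alpha) * (y - 3*alpha))

# The partial-fractions step the students were meant to skip:
pf = sp.apart(f, alpha)

# The antiderivative in alpha; differentiating it recovers f.
F = sp.integrate(f, alpha)
```

Here SymPy does the partial-fraction decomposition and integration symbolically without needing an `assume` incantation, since the assumptions are attached to the symbols at creation.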

2. Access violation: Here’s the one that’s still plaguing me. A research student of mine is working on a problem we derived: we want to use bootstrapping to approximate a distribution for fund price returns and portfolio returns, determine the Value at Risk (the 1st or 2nd percentile of monthly returns), and then use the Value at Risk of the entire portfolio as an objective function to minimize. That way we can choose the funds in our portfolio so as to minimize our Value at Risk. Yada yada yada, our nonlinear optimization code, LSGRG2, is crashing with the new objective function, ARGHHH! We’ve tried everything we can think of and no dice.
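The bootstrapped objective itself can be sketched in NumPy; this is a hypothetical illustration of the idea, not our LSGRG2 code (the function name, parameters, and the choice to average the bootstrap replicates are all mine):

```python
import numpy as np

def bootstrap_var(returns, weights, n_boot=2000, pct=1.0, seed=0):
    """Bootstrap estimate of portfolio Value at Risk.

    returns : (months, funds) matrix of monthly fund returns
    weights : portfolio weights over the funds
    pct     : percentile defining VaR (1.0 or 2.0 in the post)
    """
    rng = np.random.default_rng(seed)
    # Monthly portfolio returns for these weights.
    port = np.asarray(returns) @ np.asarray(weights)
    n = port.size
    var_reps = np.empty(n_boot)
    for b in range(n_boot):
        # Resample months with replacement, then take the pct-th
        # percentile of the resampled portfolio returns.
        sample = port[rng.integers(0, n, size=n)]
        var_reps[b] = np.percentile(sample, pct)
    # Average the replicates for a stable VaR estimate.
    return var_reps.mean()
```

The optimizer would then treat the fund weights as decision variables and this estimate as the objective; one suspect in an access violation like ours is the random resampling making the objective non-smooth from the optimizer’s point of view.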

Now, I must go home and leave my work here. That’s one of my resolutions, so here goes!

Hey Dr. Franklin!! Cool page! Thanks for figuring out how to fix that problem. I have been looking at it all day. It was starting to get annoying. See ya tomorrow morning.


Thanks.


I hate it when my codes and functions collide. Other than that, I have no idea what you are talking about. But it sounds darn cool.
