Monthly Archives: January 2006

Research Statement

In addition to my previous post about my Teaching Philosophy, I have been working on a Research Statement. I am posting it here so I can refer back to it in later months or years and see how things might have changed.

My Research Interests

My primary areas of research interest lie in numerical analysis, specifically numerical solutions to ODE and PDE problems. My most recent emphasis has been on the development of non-conforming finite elements utilizing multi-field methods. I have developed a three-dimensional implementation of the three-field method for the partitioning of domains and have applied it to elliptic and parabolic model problems. Additionally, I have been extending the model equations used in these techniques to include stochastic terms, primarily in the load function, and I am interested in exploring results when other parameters in the model contain stochastic elements. Along with current researchers in the Mathematics Department at Texas Tech University, I have discussed the possibility of extending the current application to multi-scale models.

Additionally, I am interested in continuing my research in scattered data fitting with B-spline functions. Currently, I am exploring the performance of a variable-knot spline algorithm for data fitting, an approach that has become much more feasible with the advent of high performance computing technology. Working with researchers at Texas Tech and Arizona State University, I am comparing its performance with that of other spline algorithms such as “penalty splines” and “smoothing splines”.

During my time at Texas Tech University, I worked for the High Performance Computing Center, developing numerous parallel applications for the multi-processor SGI Origin 2000, as well as high-end 3D visual representations of data. Most recently, when the center implemented a computational grid, I developed applications to run in that environment. I would like to continue the development of high performance computing applications as they relate to areas of computational mathematics, such as the finite element method and data fitting algorithms mentioned above.

I have also been involved in interdisciplinary research with the Physical Chemistry department at Texas Tech as a postdoctoral researcher. I developed a method for isolating the rotational and vibrational energies of a molecule by use of a rotating coordinate system. I feel that such interdisciplinary projects are central to the future of mathematical research, particularly in the area of numerical analysis. Collaboration with other departments is beneficial, not only to each individual researcher and their field, but to the preparation of students involved in those research areas as they head into industry.

Although, in my current position at a teaching university, I have not had the opportunity to obtain grant funding for research projects such as those mentioned above, I have participated in grant projects as a researcher and am very interested in actively pursuing external funding for such work.

The Luhn Algorithm

Here’s a question I’d never asked before today, but now I know the answer. When you are shopping online and you enter a credit card number into a form, how do they know that it is a valid credit card number? I always imagined that they were immediately checking the number against some database of valid credit card numbers and matching it against my address. That may be true in many cases but is not necessarily so. Most credit card companies validate that a credit card number is authentic using the Luhn algorithm, sometimes called the modulus 10 algorithm.

In other words, not every 13- or 16-digit number can be a credit card number. I think I had figured out just through inspection that VISA cards start with a 4, Mastercards start with a 5, and Discover cards start with a 6, but beyond that, the number must satisfy the following test:

1. Take the 1st, 3rd, 5th, 7th, etc. digits from left to right (the odd places). Double each of those numbers. If the value ends up larger than 9, then subtract 9 from it. (Using the odd places works here because the number has 16 digits; stated generally, the Luhn algorithm doubles every second digit counting from the right, starting with the digit next to the last.)
2. Then add all of these together with the numbers in the even places.
3. The result must be evenly divisible by 10 (i.e., have a zero in the ones place).

EX: 3205 3211 7082 0010
The odd places are 3, 0, 3, 1, 7, 8, 0, 1, which double to 6, 0, 6, 2, 14, 16, 0, 2. The two double-digit numbers now need to have 9 subtracted from them, resulting in the list 6, 0, 6, 2, 5, 7, 0, 2.
Now we place them back in the odd places, resulting in the number 6205 6221 5072 0020.
Now we sum up these digits: 6 + 2 + 0 + 5 + 6 + 2 + 2 + 1 + 5 + 0 + 7 + 2 + 0 + 0 + 2 + 0 = 40.
Since 40 is evenly divisible by 10, this would be a valid credit card number. Note, by the way, that I made that number up and it is not anyone’s credit card number.

EX: Consider the same number with the first digit changed to 2. That is, 2205 3211 7082 0010. After the doubling step we have 4205 6221 5072 0020, which sums to 38 and thus is not a multiple of 10, so it cannot be a valid credit card number.
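Here is a minimal sketch of the check in Python (my own illustration, not code from any payment library), written against the general right-to-left statement of the doubling rule:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn (mod 10) check."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Double every second digit counting from the right, starting with the
    # digit next to the last; subtract 9 whenever a doubled digit exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("3205 3211 7082 0010"))  # True: the adjusted digits sum to 40
print(luhn_valid("2205 3211 7082 0010"))  # False: they sum to 38
```

It would also make short work of the problem below by simply trying each of the ten possibilities for the third digit, but I’ll leave that to the reader.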

Problem (from think again!): The number 5439 3201 3232 3209 is not valid. Change the third digit from the left to make it valid.

Debugged!!

I finally hacked it (see previous post). During a road trip to watch the basketball teams play in OK City, I finally discovered the problem with the code where we were using a nonlinear optimizer to determine the optimal portfolio by minimizing a quantity called “Value at Risk.” Basically, the problem boiled down to the fact that the optimization algorithms require a deterministic result. We are using a gradient reduction technique to minimize the objective function; in essence, you take small steps in the direction of steepest descent. But our objective function involved a bootstrapping technique that approximates the 1st (or k-th) percentile of portfolio returns. The approximation was based on randomly sampling with replacement from the history of returns, then calculating the 1st (or k-th) percentile. Because of the randomness, a single choice of stock/fund distribution can produce a different Value at Risk in different iterations. With all that said, we can now correct the problem. Yeehaw!
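To make the failure concrete, here is a toy sketch in Python. This is my own reconstruction, not the project’s actual code: the return history, function names, and portfolio weights are all made up, and the fix shown at the end (freezing the random draws so every evaluation reuses the same bootstrap sample) is one standard way to make such an objective deterministic.

```python
import numpy as np

# Fake history: 250 days of returns for 3 funds (illustration only).
history = np.random.default_rng(0).normal(0.0005, 0.01, size=(250, 3))

def bootstrap_var(weights, returns, k=1, rng=None):
    """Estimate Value at Risk as the k-th percentile of portfolio returns,
    computed from a bootstrap resample (with replacement) of the history."""
    rng = np.random.default_rng() if rng is None else rng
    port = returns @ weights                                 # daily portfolio returns
    sample = rng.choice(port, size=port.size, replace=True)  # resample with replacement
    return -np.percentile(sample, k)                         # loss at the k-th percentile

w = np.array([0.5, 0.3, 0.2])

# Same weights, different answers -- poison for a descent method that
# compares nearby function values:
print(bootstrap_var(w, history), bootstrap_var(w, history))

# Deterministic version: reuse the same random draws on every call.
print(bootstrap_var(w, history, rng=np.random.default_rng(42)),
      bootstrap_var(w, history, rng=np.random.default_rng(42)))
```

With the draws frozen, two evaluations at the same weights agree, so step-to-step comparisons of the objective measure the portfolio rather than the noise.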

Mathematical Blunder #5

Vector products are non-associative. Apparently a neuron misfired as I wrote out the properties of the cross product of two vectors. Fortunately, one student was on their toes and asked, “Are there more properties than are in the textbook?” To be honest, there are, but one of the ones I happened to list was not among them. Nevertheless, board blunders are gateways to great explorations. I had fun demonstrating the concepts with two long sticks, a stack of dry erase markers, and my thumbs.
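A quick counterexample with the standard basis vectors shows why the grouping matters: [tex](\mathbf{i} \times \mathbf{i}) \times \mathbf{j} = \mathbf{0} \times \mathbf{j} = \mathbf{0}[/tex], while [tex]\mathbf{i} \times (\mathbf{i} \times \mathbf{j}) = \mathbf{i} \times \mathbf{k} = -\mathbf{j}[/tex]. Sticks and thumbs aside, that one computation settles it.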

By the way, to my wife, who researched the usefulness of technology in math instruction: this is a key place where technology really does improve math instruction. Don’t you think a 3D computer visual would be more effective than my thumbs? It seems like my training in OpenGL (a 3D graphics library) would come in handy just for this.

21 ways to compute n!

(Reminds me of the Paul Simon song “50 Ways to Leave Your Lover.”)

Anyways, that exclamation point in the title is not intended to convey excitement about this post, but rather the factorial of an integer [tex]n[/tex]. Recall that [tex]n! = n(n-1)\cdots(2)(1)[/tex]. I was surprised to learn there are so many ways to compute it: Fast Factorial Functions

I particularly liked that the only one I recognized was described as the “ubiquitous, stupid one.”
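For reference, the one I recognized is presumably the direct recursive translation of the definition; a minimal sketch in Python:

```python
def factorial(n: int) -> int:
    """The obvious recursion: n! = n * (n-1)!, with 0! = 1! = 1."""
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120
```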

HT: Dr. Hahn