Concise and easy to use parameter types in KMDF
One of the goals of KMDF was to use clear, concise types in our parameters and structures so that their intended use was obvious and there was a safe way to use them. Some were obvious to us from the start; others were suggested by our beta testers and the outside community as better alternatives. Here are a few of them.
BYTE (vs UCHAR)
Both have the same storage capacity, but the latter indicates a character while the former indicates an unspecified 8-bit quantity. If I had an index that could only fit into a byte, I would use the BYTE type.
UNICODE_STRING (vs PWSTR)
The former does not require a NULL terminator and, more importantly, is the string type used for all underlying WDM calls. PWSTR is too problematic in terms of guaranteeing the NULL terminator and of converting from a UNICODE_STRING to a PWSTR.
In my opinion, the missing piece of the puzzle was the lack of safe string APIs that manipulated a UNICODE_STRING; without them, KMDF could not use UNICODE_STRING as its standardized string parameter. If you wanted safe string functionality for a UNICODE_STRING, you had to treat the buffer like a PWSTR, use the safe string API, and then translate the results back into a UNICODE_STRING…talk about error-prone code. This led me to duplicate all the safe string functions in ntstrsafe.h (and then some, since any function which took a string as a source parameter needed both a PWSTR version and a PUNICODE_STRING version) and include these changes in the Server SP1 DDK and WDK.
ULONGLONG (vs ULARGE_INTEGER (or their signed equivalents))
This one was so simple to do once it was pointed out to the team (thanks to Don Burns for the suggestion during the beta!). ULARGE_INTEGER was created when NT was initially being developed because there was no compiler support for 64-bit values. Support for 64-bit values has been in the compiler for a long time, so exposing the native compiler types made more sense than using a legacy type.
Enumerants (vs #defines)
I wrote about this before, and I think that post goes into greater depth than I can here. What it boils down to is that I feel enumerants provide type and range safety that a #define does not, and can prevent simple mistakes.
Comments
Anonymous
January 22, 2007
> If I had an index which could only fit into a byte, I would use the BYTE type.

I believe that's a job for the CCHAR typedef.

> Enumerants (vs #defines)

Don't forget debugger integration! IMO the only "problem" with enums is the unpredictable and non-portable bit width they get, but usually you can get away with forcing a minimum width with a bogus value like 0xFFFFFFFF.

Anonymous
January 22, 2007
The comment has been removed