Best practices – using Nullable data types on class properties that correspond to entities' primary keys

  • General discussion

  • I have 3 libraries:

    • Business objects designed to communicate with the Data Access Layer.  Every class has a property (of type long) that corresponds to the primary key of the database entity. 

    • A middle-tier business logic layer that communicates those business objects back to the DAL.

    • A Data Transfer Objects (DTOs) library that communicates with the UI.  The business logic layer validates the primary key property on the DTOs for zero values prior to view, update, or delete operations.

    One programmer (who is more experienced with database development) complains that this amounts to treating NULL as zero, violating the “NULL is not zero” principle.  He wants the Ids on all of the DTOs to be changed to Nullable while leaving the Ids on the business objects (which communicate with the DAL) as long.  Another programmer thinks that validating primary keys against zero is valid because zero is not a valid primary key for any entity.

    Does the “NULL is not zero” argument apply to this scenario?  Is it a “best practice” for all Ids on the DTOs to be converted to Nullable and then passed as long to the business objects after being validated as not null?
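
    For concreteness, here is a minimal sketch of the two shapes under discussion; the class and member names (ProgramBusinessObject, ProgramDto, ProgramId) are hypothetical, not taken from the actual codebase:

        using System;

        // Business object handed to the DAL: the key is a plain long, where 0
        // stands for an entity that has not been persisted yet.
        public class ProgramBusinessObject
        {
            public long ProgramId { get; set; }
            public string Name { get; set; }
        }

        // DTO handed to the UI: the key is Nullable<long>, where null stands for
        // an entity that has not been persisted yet.
        public class ProgramDto
        {
            public long? ProgramId { get; set; }
            public string Name { get; set; }
        }

        public static class DtoMapper
        {
            // Business logic layer: validate the DTO key, then map to the BO type
            // that the DAL expects.
            public static ProgramBusinessObject ToBusinessObject(ProgramDto dto)
            {
                if (!dto.ProgramId.HasValue)
                    throw new ArgumentException("ProgramId is required for this operation.");

                return new ProgramBusinessObject
                {
                    ProgramId = dto.ProgramId.Value,
                    Name = dto.Name
                };
            }
        }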

    The standards for the application's database programming were established long ago and are not open for change.  The database has a user-defined type: CREATE TYPE [dbo].[PKIdentifier] FROM [bigint] NOT NULL

    And every table has a primary key definition like this: [Program_ID] [dbo].[PKIdentifier] IDENTITY(1,1) NOT NULL

    In other words, all primary keys are long IDENTITY(1,1) values.  Nevertheless, in building the business logic layer, I do not want it to assume anything about the data layer implementation.

    The middle tier was written to check for zeros because the business objects (which were defined with data type long for the primary key properties) can only accept zero as the primary key of any non-persisted entity for certain commands such as CREATE.  An integration test was also checking for a non-zero value in the primary key property of a business object class as an indication of the success of the CREATE command.  The application works.  I am only concerned about best practices and code quality.
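
    The integration-test check mentioned above looks roughly like the following sketch (MSTest syntax; ProgramDataAccess and the test name are invented for illustration, and ProgramBusinessObject is the hypothetical class sketched earlier):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class ProgramCreateTests
        {
            [TestMethod]
            public void Create_AssignsNonZeroPrimaryKey()
            {
                // A non-persisted business object starts with the zero sentinel.
                var bo = new ProgramBusinessObject { ProgramId = 0, Name = "Test program" };

                // Hypothetical DAL call; after CREATE the IDENTITY value is copied
                // back onto the business object.
                var dal = new ProgramDataAccess();
                dal.Create(bo);

                // A value greater than zero is taken as evidence that CREATE succeeded.
                Assert.IsTrue(bo.ProgramId > 0, "CREATE should assign a positive identity value.");
            }
        }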

    Are tests that check whether the primary key value is greater than zero good practice?  Note that the business library was not open for modification and uses only long for its primary key properties.  Should the business objects be modified to allow nullable values, and the DAL be modified accordingly to handle Nullable properties, to maintain “best practices”?  Or is there no “best practice” for this situation?

    Tuesday, February 19, 2013 7:03 PM

All replies

  • I don't think there is a single best practice for this situation.  Either route seems sound in its own right.  The one thing we can do, however, is look at what something like Entity Framework would do.

    If you look at EF-generated entity classes, they will use Nullable(Of Long) if Null values are allowed, or just Long if not.  In this case I think you are saying that Null is not a valid value, and zero is a special-case value.  So that would indicate that the object model consumed by the UI layer should probably not honor null as a valid value, and should include validation to ensure 0 is only used during the appropriate object state or operation.
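
    In C# terms, that convention looks roughly like this (the entity shape and the Parent_ID column are hypothetical, not generated from the poster's actual schema):

        // A NOT NULL key column maps to a plain long property, while a column
        // that allows NULL would map to long? (Nullable<long>).
        public class Program
        {
            public long Program_ID { get; set; }   // NOT NULL identity column
            public long? Parent_ID { get; set; }   // hypothetical NULL-allowed column
            public string Name { get; set; }
        }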

    Your particular architecture sounds a bit spread out, but if you can think of it in terms of a "data-access tier", a model layer, and a view layer, then it might make more sense why you would not allow null values and would validate special-case values in the model layer.

    -EDIT-

    Very well-written post, by the way.  Quite refreshing to read!


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"


    Tuesday, February 19, 2013 8:53 PM
    Moderator
  • Thanks, Reed, for posting a well-thought-out response.

    I am inclined to agree with you that using zero as a special case was adequate for this application.

    However, the developer who designed the database with primary keys as IDENTITY(1,1) and the business objects with long (instead of Nullable<long>) has recently described the business logic layer's comparisons of primary keys to zero as equivalent to treating Null as zero.

    I am concerned that using the zero check for primary key values couples the implementation of the middle tier to the implementation of the database.  For example, if the database design were to change to allow negative or zero primary keys, the middle tier would fail.  That's why I am seeking the opinions of as many people on this forum as possible, to see how others have dealt with this situation.


    Tuesday, February 19, 2013 9:41 PM
  • Any data object model is likely to fail if the underlying data structure changes.

    I think the question is how do you handle the null value case?  Does this cause an exception, or do you transparently convert it to zero?

    You essentially have two object models: one tied more to the data access layer and one tied more to the UI layer.  I can see where one would argue that the object model tied to the UI layer would be responsible for the validation.  So that would mean moving the validation out of your business layer and into your transfer model.
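
    As a rough sketch of that suggestion, the transfer model could carry its own key validation instead of the business layer (the class and member names are illustrative only):

        using System;

        public class ProgramDto
        {
            public long? ProgramId { get; set; }

            // Called by the UI-facing code before an update or delete is attempted,
            // so the business layer no longer compares the key to zero itself.
            public void EnsureHasKey()
            {
                if (!ProgramId.HasValue)
                    throw new InvalidOperationException("A persisted ProgramId is required for this operation.");
            }
        }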

    Another question would be which layer or model is currently responsible for all of the other user input validation?


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    Tuesday, February 19, 2013 10:01 PM
    Moderator
  • Ok wait, sorry, I think I got caught up in the comparison and missed the beef of the question.

    You are strictly concerned about the validation of zero values itself.  Not whether or not the value is null, huh?

    If the database and corresponding data object model were both updated, then I see where your current data access layer would break when it might not have to.  So what the one developer wants is for you to pass a zero to the object model and if that is wrong, let the data object model throw its internal exception or respond however it wants.

    Is that correct?  If so, I would tend to agree with him.
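
    In code, that arrangement might look something like this sketch (the class and method names are invented; the point is that the data side, not the business layer, owns the rule about what a legal key is):

        using System;

        // Minimal stand-in for the business object (hypothetical shape).
        public class ProgramBusinessObject
        {
            public long ProgramId { get; set; }
        }

        public class ProgramDataAccess
        {
            public void Update(ProgramBusinessObject bo)
            {
                // The business layer passes the key through unchecked; only the layer
                // that knows the key scheme decides that 0 (an unsaved entity) cannot
                // be updated, and it throws its own exception if so.
                if (bo.ProgramId == 0)
                    throw new InvalidOperationException("Cannot UPDATE an entity that has not been persisted.");

                // ... issue the UPDATE against the table's primary key here ...
            }
        }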


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    Tuesday, February 19, 2013 10:07 PM
    Moderator
  • You are correct in assuming that the database developer would prefer that the DAL throw an exception on a primary key value of zero and that the business logic layer make no assumptions regarding the value of the primary key.

    Therefore, all existing code checks that compare primary key values to zero should be switched to use the Nullable<T>.HasValue property.  This means that all DTOs must be rewritten to allow the properties corresponding to primary keys to become Nullable<long>, even though the BOs and the database do not allow Null values for primary keys.
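
    As a rough before/after illustration of that change (the method and parameter names are hypothetical):

        using System;

        public static class KeyValidation
        {
            // Before: the DTO key is a plain long, so the business layer compares it
            // to the zero sentinel.
            public static long ValidateBefore(long programId)
            {
                if (programId == 0)
                    throw new ArgumentException("ProgramId is required for this operation.");
                return programId;
            }

            // After: the DTO key is Nullable<long>, so the check uses HasValue, and
            // the value is unwrapped to a plain long before it reaches the BO and DAL.
            public static long ValidateAfter(long? programId)
            {
                if (!programId.HasValue)
                    throw new ArgumentException("ProgramId is required for this operation.");
                return programId.Value;
            }
        }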

    Tuesday, February 19, 2013 10:58 PM