SQL Server Performance

Xtreme Data driven applications

Discussion in 'Non-Transact SQL Developer Performance Tuning' started by JLDominguez, Sep 20, 2005.

  1. JLDominguez New Member

    I have a good one for you.

    I wanted to toss this out and get your feelings about a particular development paradigm some of our developers are pursuing. The concept involves a totally data-driven application. All the particulars about the application are stored in tables in the database - this includes C# code as well. Even the parameters used to build SQL statements dynamically will be stored in the database. So before you can query the data, you have to query the database to find out what data needs to be retrieved. Without sounding like an old fart, I don't recall ever working with an application that didn't require some type of maintenance to add or modify functionality. I'm opposed to this for the obvious reasons. Please share any experiences, good or bad, with this type of development methodology.
    - Thx in advance
  2. Argyle New Member

    It depends on how the application is built. Has the application been tested in any large-scale environment under heavy load? If done correctly it could work, but it's hard to say without seeing the application. You might have problems trying to tune such an application via indexes etc., but again, it depends.

    We once had an application that tried to bring object-oriented thinking straight into the database. A single table existed with the columns id, name and value, where value could be the id of another row. With 10 GB of self-references it was a nightmare.
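    To make the pain concrete, here is a minimal sketch of that kind of generic name/value table and the self-reference chasing it forces. This is my own illustration in Python with sqlite3, not the actual system Argyle describes; the table and row contents are invented:

```python
import sqlite3

# One generic table holds everything: id, name, value.
# A value may itself point at the id of another row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO objects (id, name, value) VALUES (?, ?, ?)",
    [
        (1, "customer.address", "ref:2"),   # points at row 2
        (2, "address.city",     "ref:3"),   # points at row 3
        (3, "city.name",        "Stockholm"),
    ],
)

def resolve(obj_id: int) -> str:
    """Chase ref:<id> links until a literal value is found.
    Every hop is another lookup - with gigabytes of rows,
    this chain-walking is what makes the design a nightmare."""
    row = conn.execute("SELECT value FROM objects WHERE id = ?", (obj_id,)).fetchone()
    value = row[0]
    if value.startswith("ref:"):
        return resolve(int(value[4:]))
    return value

print(resolve(1))  # three lookups to get one attribute
```

    Note how fetching a single logical attribute costs three index seeks instead of one, and no conventional index can help, because the "schema" lives in the data.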
  3. Adriaan New Member

    Why not use a true middle-tier application?
  4. satya Moderator

  5. JLDominguez New Member

    quote:Originally posted by Adriaan

    Why not use a true middle-tier application?

    This app will be using merge replication with some end users being in a disconnected mode when they go out to the field.
  6. JLDominguez New Member

    quote:Originally posted by Argyle

    It depends on how the application is built. Has the application been tested in any large-scale environment under heavy load? If done correctly it could work, but it's hard to say without seeing the application. You might have problems trying to tune such an application via indexes etc., but again, it depends.

    We once had an application that tried to bring object-oriented thinking straight into the database. A single table existed with the columns id, name and value, where value could be the id of another row. With 10 GB of self-references it was a nightmare.

    I'm also concerned with performance. Right now it's in the design phase. So far I've seen about 9 tables which store all sorts of parameters. I've pointed out how difficult it is to tune data stored this way. I've also stressed that adequate stress/load testing will be required. This is not a very complex application, but it does store large amounts of data, so it can grow to be large. I can understand storing some parameters and settings. However, storing C# code and T-SQL commands and parameters in tables seems extreme to me. In addition, storing C# code in tables implies having to compile that code at run time - not to mention that the queries themselves cannot be tuned. All this information will have to be managed and properly documented, which almost never happens. I also see increased network traffic just to load and run the application. There are just so many things that I see wrong with this approach.
  7. Adriaan New Member

    quote:Originally posted by JLDominguez


    quote:Originally posted by Adriaan

    Why not use a true middle-tier application?

    This app will be using merge replication with some end users being in a disconnected mode when they go out to the field.
    Why not a web-based application, like in ASP.NET?
  8. JLDominguez New Member

    quote:Originally posted by Adriaan


    quote:Originally posted by JLDominguez


    quote:Originally posted by Adriaan

    Why not use a true middle-tier application?

    This app will be using merge replication with some end users being in a disconnected mode when they go out to the field.
    Why not a web-based application, like in ASP.NET?

    Some users will be using the app in a disconnected mode - i.e. completely disconnected from the network. They will need the client and a replicated copy of the database on their local machines that will be merged with the publisher once reconnected. I don't think that can be done with ASP.NET

  9. Adriaan New Member

    With ASP.NET the client app runs in a web browser, and the web server connects to the database and returns the requested information to the client app. So all the user needs is a connection to the internet and a login to the web server. Of course you need to set up the web application in a secure way, but why go through the hassle of replication? Not to mention the time involved to synchronize!
  10. JLDominguez New Member

    quote:Originally posted by Adriaan

    With ASP.NET the client app runs in a web browser, and the web server connects to the database and returns the requested information to the client app. So all the user needs is a connection to the internet and a login to the web server. Of course you need to set up the web application in a secure way, but why go through the hassle of replication? Not to mention the time involved to synchronize!

    That is the problem. Users will not have access to the internet - they are completely disconnected. They would have to have wireless access to the internet. When I say they travel to the field, I mean it literally. I doubt there is any type of wireless internet access where these folks travel.
  11. Adriaan New Member

    In that case I would use the most robust kind of system imaginable: just plain old tables, relationships, and stored procedures. Otherwise this will soon become a maintenance nightmare.
  12. mmarovic Active Member

    Why don't they use a version control system for the executables? That way, when they are connected, replication updates the database and at the same time a VCS update refreshes their executables and configuration files.
  13. JLDominguez New Member

    quote:Originally posted by mmarovic

    Why don't they use a version control system for the executables? That way, when they are connected, replication updates the database and at the same time a VCS update refreshes their executables and configuration files.

    I've suggested that. - Thx
  14. druer New Member

    I'm not exactly sure what the goal of saving part of the actual application in the database would be, but after developing application code for over 20 years in a slew of environments I'll hazard a guess, and you can see if this is the situation or not. I would guess that they are storing the app code/query parameters so that the application can be customized on a per-user basis. There would be a block of default code that the application starts with; User A then asks for some customizations, the standard block of code is modified (at runtime or by developers) to include User A's requests, and that code is stored. When User A logs in the next time, the application behaves differently based on those requests. Or perhaps it is more general, on a "role"/"user group" basis.

    I've never implemented application code storage in the database, because the same can be accomplished without it. However, I have implemented the concept of storing parameters for queries in the database. To ease your mind a bit, that data can be loaded a single time during application startup or at logon, so it won't have to be hit over and over as the application continues to run. The goal is actually to help improve performance to some degree, and to provide customizations that can be very helpful. For example, User A may wish to see entirely different Account data from User B. By storing the fields that each wishes to see (or should see, based on security), a dynamic SQL clause is constructed, and only those fields are selected and returned. The performance savings come in when users limit the number of fields that are brought back, so only a fraction of the potential data actually has to be transferred. The WHERE clause is obviously the important part for performance on the server: provided they construct the clauses to use parameters, and don't write out literal strings like "where account_id = 5 and date < some_date_value" which force the server to recompile each time, your server should perform fine. When the same user fires a second request, it should arrive with the same set of return fields and the same WHERE-clause parameters, so the server should/could still have the plan cached and already optimized.
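    The plan-reuse point can be sketched in a few lines: build the dynamic SELECT from the per-user field list, but always pass the filter values as bound parameters, so the statement text stays identical across executions and the server can reuse its compiled plan. Python with sqlite3 stands in for the C#/SQL Server stack here, and the table, column, and field-list names are my own invention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (account_id INTEGER, owner TEXT, balance REAL, opened TEXT)")
conn.execute("INSERT INTO account VALUES (5, 'JL', 100.0, '2005-01-01')")

# Per-user field list, loaded once at logon (from a hypothetical config table).
user_fields = ["account_id", "balance"]

# Whitelist-check the field names: they drive SQL text, so they must
# come from a controlled list, never from free-form user input.
ALLOWED = {"account_id", "owner", "balance", "opened"}
assert set(user_fields) <= ALLOWED

# The statement text is identical for every run with the same field list;
# only the parameter values change, so a cached plan can be reused.
sql = f"SELECT {', '.join(user_fields)} FROM account WHERE account_id = ? AND opened < ?"
row = conn.execute(sql, (5, "2006-01-01")).fetchone()
print(row)  # only the requested columns come back
```

    Contrast this with concatenating "where account_id = 5" into the string: every distinct literal produces new statement text, and the server compiles a fresh plan each time.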

    In terms of documentation, storing the code/parameters in the database actually gives you the chance to enforce better documentation, because an audit can be kept of every modification to those tables, either through triggers or by imposing a versioning system on the tables themselves. The other beauty of storing this in the database is that the developers can utilize TEMPORALITY to store changes that shouldn't take effect until a certain date/time in the future. (Temporality is just a word for the concept of a "this row is valid from this date until this date" methodology.) So instead of rolling out new versions of the application next March, they can make the code changes and store them in the database ahead of time. When the application makes its query to get the "code"/"parameter information", it pulls the current set of "stuff", but next March 12th it will pull the next version of the code automatically, and you never had to touch any client machines. Gotta love that release model. If March 12th rolls around and things suddenly start falling apart at the seams, you run a simple UPDATE to change the "when is this row of data valid" settings so that the previous version doesn't expire until April and the new stuff doesn't start until April, giving them a chance to correct the problem while everything reverts to the previous working version. Or you can implement a really slick user-based versioning whereby a limited set of users gets access to the new version and, provided it works for them, everyone else gets rolled into it. That kind of stuff.
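    The temporality idea - each row carries a validity window, and the client simply pulls whichever version is current - can be sketched like this. The table, columns, and dates are my own illustration (Python/sqlite3 again), not the poster's actual design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE app_config (
    setting TEXT, value TEXT, valid_from TEXT, valid_to TEXT)""")
conn.executemany("INSERT INTO app_config VALUES (?, ?, ?, ?)", [
    ("report_query", "v1", "2005-01-01", "2006-03-12"),  # current version
    ("report_query", "v2", "2006-03-12", "9999-12-31"),  # pre-staged release
])

def current_value(setting: str, today: str) -> str:
    """Pick the row whose validity window contains 'today' (ISO dates
    compare correctly as strings). Rolling a bad release back is just
    an UPDATE on valid_from/valid_to - no client machines touched."""
    row = conn.execute(
        "SELECT value FROM app_config WHERE setting = ? AND valid_from <= ? AND valid_to > ?",
        (setting, today, today)).fetchone()
    return row[0]

print(current_value("report_query", "2006-03-11"))  # old version
print(current_value("report_query", "2006-03-12"))  # new version, picked up automatically
```

    The same query also supports the limited-rollout variant: add a user or group column to the table and include it in the WHERE clause, so only the pilot users match the new row.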

    Of course all my ideas could be pie in the sky, and your developers may only be doing it because they can, and because they want to tell all their friends they're doing something so radical and different that everyone is impressed and buys them free beers at the bar. I would talk to them about why they are doing it, and if they are doing it for the above reasons, or for others that seem realistic, then embrace the newness and consider the fact that you end up with complete control over the development team for once in your life. Also consider that instead of a bunch of sloppy, undocumented code, you can help enforce versioning through well-known database practices, which will help the group in the long run.

    (Sorry about being long winded.)
