Have you ever created a virtual machine in Azure with a single NIC, only to find later that you need another network interface? I know I have!
At the time of writing there is no way to do this through the Azure Portal. There also doesn't seem to be much information on it if you're using the newer Resource Manager deployment model. Here are some notes I made to make it happen.
This involves a major reconfiguration of your virtual machine. I take no responsibility if these steps don't work or cause a problem for you!
Here's what I had in place before starting:
- Second network interface created in the Azure Portal
- Existing network security group with that network interface associated in the Azure Portal
- Screenshots of key settings, especially name, size, resource group, location, and disks
- Virtual machine shut down
- Virtual machine deleted
I didn't need an availability set so that's not covered here.
Start with the following script, and amend it for your needs:
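A minimal sketch of that script, using the Resource Manager (AzureRM-era) PowerShell cmdlets, might look like the following. Every name, size, location, and URI below is a placeholder - substitute your own values from the screenshots you took earlier:

```powershell
# Sketch only: recreate a VM from its existing OS disk, this time with two NICs.
$rg = "MyResourceGroup"

# Both NICs must already exist (the second one created in the Azure Portal)
$nic1 = Get-AzureRmNetworkInterface -Name "myvm-nic1" -ResourceGroupName $rg
$nic2 = Get-AzureRmNetworkInterface -Name "myvm-nic2" -ResourceGroupName $rg

# The VM size must support multiple NICs
$vm = New-AzureRmVMConfig -VMName "myvm" -VMSize "Standard_A2"

# Attach both NICs; exactly one must be marked primary
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id -Primary
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic2.Id

# Attach the existing OS disk rather than creating a new one
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "myvm-osdisk" `
    -VhdUri "https://mystorage.blob.core.windows.net/vhds/myvm-osdisk.vhd" `
    -CreateOption Attach -Windows

New-AzureRmVM -ResourceGroupName $rg -Location "West US" -VM $vm
```

If your VM had data disks, attach each of them with Add-AzureRmVMDataDisk (also with -CreateOption Attach) before the final New-AzureRmVM call.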
To keep it easy to read, I didn't use any variables except for the repetitive Resource Group name. You can find details of your disks in the storage account for the VM, under the "vhds" container.
Once you run this and log in, wait a few minutes for it to churn through. At the end you should be able to go into the Azure Portal and see your VM back again - now with another network interface.
This post gives some tips on how to improve the performance of your applications running Entity Framework (EF6 era), and access to SQL generally.
EF has been the primary ORM I've worked with since 2011. Over the last year I've needed to optimize a very data access intensive project (synchronising complex data from one system to another during peak periods) that used EF. I profiled, measured, tested, and then did it all again... Based on this experience and other research, here are some thoughts on getting the best from EF.
Keep context open only for as long as necessary
Long-running contexts are a performance killer. EF already caches anything that can be reused when it starts up for the first time, so holding a context open saves you nothing except a few lines of code - and it's not worth it: you can find yourself with slow performance, memory growth, and locking problems!
Also watch for code that runs while your context is open but is not needed by any queries. Let's not keep that transaction running for any longer than it needs to be...
Finally, this is hopefully obvious, but external resources (like database connections) that are no longer needed but never released will leak. Let EF know you're done by wrapping your database calls in the using pattern.
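As a sketch (the context and entity names here are made up, not from any real project):

```csharp
// Keep the context's lifetime as short as possible.
// "MyContext" and "Orders" are hypothetical names.
using (var db = new MyContext())
{
    var openOrders = db.Orders
        .Where(o => o.Status == OrderStatus.Open)
        .ToList();
    // ... work with openOrders ...
} // disposed here: the connection is released and tracked entities are freed
```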
Optimize read-only queries
Design your data read algorithms to minimise database access. If you need to refer to the same set of data using queries that differ in minor ways, consider loading the entire set into memory first. Then query the set in memory. It may also be useful to cache the data set.
If you don't need change tracking (i.e. to perform writes), add .AsNoTracking() to your query to ensure the results will not be attached to the context. Attaching entities takes time, and it also slows down any later call to .SaveChanges(), which has to consider every tracked entity - including the results of queries that never needed tracking in the first place. This is exacerbated when those queries return a lot of results.
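For example (entity names hypothetical):

```csharp
// Read-only query: results are materialised but never attached
// to the context, so there's nothing extra for SaveChanges() to scan.
var report = db.Orders
    .AsNoTracking()
    .Where(o => o.Created >= cutoff)
    .ToList();
```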
Avoid loading full objects
Continuing with the read-only theme, if you only need a couple of values from each row then just Select() what you need. This instructs EF to request only those columns from SQL, avoiding both loading the full object and tracking it in the context.
If you just need to check the existence of something and don't need to load it from the database, then... errr... don't. :-) Instead, use .Any() to check if a data row contains a particular value.
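Both techniques look like this in practice (entity names hypothetical):

```csharp
// Project only the columns you need instead of full entities.
var summaries = db.Customers
    .Where(c => c.IsActive)
    .Select(c => new { c.Id, c.Name })
    .ToList();

// Existence check: translates to an EXISTS query and loads no rows.
bool hasOverdue = db.Invoices.Any(i => i.DueDate < DateTime.Today);
```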
Consider disabling eager and/or lazy loading
Yes, here I am recommending that you turn off some of the key features of Entity Framework. They are wonderful, but they aren't always appropriate when you need full control over what gets loaded from the database and when.
Eager loading often pulls in too much at once, which apart from additional data access, also means additional work for EF to manage those objects during the lifetime of the context. However if your object makes no sense unless those relationships are loaded in, then eager loading could still be for you.
Lazy loading can fire off additional round trips to the database when you materialise your query. With lazy loading off, you can avoid this by explicitly loading the objects you need as part of the original query with .Include().
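A sketch of the explicit approach (names hypothetical; the lambda overload of Include() needs a using for System.Data.Entity):

```csharp
// Lazy loading disabled for this context.
db.Configuration.LazyLoadingEnabled = false;

// Related rows come back in the original query - one round trip
// instead of one lazy load per order.
var orders = db.Orders
    .Include(o => o.OrderLines)
    .Where(o => o.Status == OrderStatus.Open)
    .ToList();
```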
Be careful returning IQueryable<T>
Returning an EF query as IQueryable<T> from a method is handy because it lets you build on the query later. However, this can also bite when queries stack up and their (often unnecessary) complexity is hidden. Then there is the effect of looping over an IQueryable, which can cause EF to execute a separate query against SQL Server for each iteration.
Instead centralise your query to a single method and use ToList() to return exactly what you need.
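Something like this (names are mine, not from any real project):

```csharp
// Centralise the query in one method and return a concrete list,
// so callers can't unknowingly stack more work onto the query.
public List<OrderSummary> GetOpenOrderSummaries()
{
    using (var db = new MyContext())
    {
        return db.Orders
            .Where(o => o.Status == OrderStatus.Open)
            .Select(o => new OrderSummary { Id = o.Id, Total = o.Total })
            .ToList();   // the SQL executes exactly once, here
    }
}
```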
Speed up bulk adds
If you are simply adding a number of objects then saving them, you may not be aware that EF will check whether each of those new objects has changed. To avoid this, turn off AutoDetectChangesEnabled before adding, then back on again afterwards. It does exactly what its name suggests.
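The usual pattern looks like this (entity names hypothetical):

```csharp
// Suspend change detection while adding many objects,
// then restore it no matter what happens.
try
{
    db.Configuration.AutoDetectChangesEnabled = false;
    foreach (var reading in readings)
        db.Readings.Add(reading);
    db.SaveChanges();
}
finally
{
    // Always turn it back on, even if SaveChanges throws.
    db.Configuration.AutoDetectChangesEnabled = true;
}
```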
There is also a community contribution called EntityFramework.BulkInsert that provides support for SQL bulk inserts using SqlBulkCopy. It works really well, but be aware that the version at the time of writing is not thread safe, so you will need to wrap the call to BulkInsert() in a lock statement.
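For example (method and entity names are mine):

```csharp
// Serialise calls into the non-thread-safe BulkInsert extension.
private static readonly object BulkInsertLock = new object();

public void InsertBatch(MyContext db, List<Reading> batch)
{
    lock (BulkInsertLock)
    {
        db.BulkInsert(batch);   // extension from EntityFramework.BulkInsert
    }
}
```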
EF6 has no equivalent support for bulk updates or deletes, so the only way to solve this is to issue a good ol' parameterised SQL query yourself. You can run this through EF so that it is wrapped in a transaction. Keep in mind that you are bypassing any tracking EF may be doing of these objects (directly or as relationships).
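A sketch of issuing parameterised SQL through EF (table and column names are illustrative only):

```csharp
// ExecuteSqlCommand runs the statement in a transaction by default
// and returns the number of rows affected.
int rowsAffected = db.Database.ExecuteSqlCommand(
    "UPDATE Orders SET Status = @p0 WHERE Created < @p1",
    (int)OrderStatus.Archived, cutoff);
```

Any entities already tracked by the context will not see this change, so reload them (or use a fresh context) if you need them afterwards.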
Make the most of available tooling
When trying to extract performance out of any application of reasonable complexity, good tooling is necessary. Here are some recommendations.
Use LINQPad for any medium to advanced queries. It's worth writing them here first. You can see the various types of syntax (including SQL) that will be generated and it is super powerful!
Entity Framework can seem like a black box at times. For slow queries you may like to run SQL Profiler (or alternative) to see exactly how your LINQ query has been transformed to SQL. Or maybe what you thought would amount to a single SELECT has multiplied into thousands. This is the only way you will know.
Every major release of EF contains performance improvements. Unless you have issues with dependencies on other libraries (e.g. the deprecated WCF Data Services requires EF5) it should be fairly simple and quick to update to the latest version of EF. For critical production systems, factor in some testing time just in case.
I'm working on a particularly database intensive application at the moment. We use Entity Framework code first for our back-end database reads and writes, and have a WCF Data Services wrapper for REST read access from the UI.
Over the last few weeks as load has increased I've started noticing the occasional SQL deadlock and timeout on production. Suddenly yesterday we reached some sort of tipping point and were inundated with timeouts. The cause of most of these was found just by running SQL Profiler and watching some unnecessary queries spill down the screen (multiple functions inside a view, bad idea). Once the chaos was largely over, I started looking at whether our database indexes were actually useful and how to verify this with EF and WCF DS.
Configuring SQL Profiler for Entity Framework
These are the SQL Profiler settings I found worked best to capture both EF and WCF DS queries (with Show All Columns ticked):
This combination allows you to see CPU time and how long statements took, and also lets you load the resulting trace into the Database Tuning Advisor (DTA). If you want to check for timeouts and deadlocks as well, make sure you include:
- Lock:Timeout (timeout > 0)
Back to the Database Tuning Advisor: it specifically requires SQL:BatchCompleted and SP:StmtCompleted. The details of how to get that running are on MSDN. Set the SQL Profiler trace to output to a file (if you want to analyse it on your local machine) or to a database table. Let it run for a few hours while your users perform the tasks you want to optimise. If you're game you can run this on production for more accurate results, but a test environment is preferred, provided you can recreate a typical load.
Analyzing SQL Profiler results in the Database Tuning Advisor
I found the most useful things from the DTA are the SQL script of recommendations in combination with the "Index usage report (recommended)". These will get you 90% of the way to some quick optimizations. After running the DTA, save the SQL script from the Actions / Save Recommendations menu. Then switch over to the Reports tab, choose the report just mentioned, and then right-click on the table of results and click "Export To File" (this feature is not obvious!).
Copy both files over to your local machine. Open the script in a text editor, and use Excel to open the XML report. Sort by "PercentUsage" and look at how the indexes you already have compare with the DTA recommendations (those starting with _dta_index). You can easily find the recommended indexes in the SQL script and adapt them to your needs - just give them good names! Also consider dropping the indexes that don't show up at all, as each database write usually triggers an update of the index, which takes time and can cause further timeout issues.
I've worked on numerous InfoPath projects over the last two years, covering each version – 2003, 2007 and 2010. As time has gone on, I've started to develop a twitch whenever someone mentions "InfoPath". Now to be clear, the product has its place and there are times when it is appropriate for use. However there are also things to be very cautious of, even to the point where alternatives should be considered such as custom development.
I've tried to tone down this post, but there is a fair bit of frustration here that is hard to eradicate! Here's my top list of gotchas.
Business logic is hidden
The default way that InfoPath names its data structures as a user creates them looks like this: field23 or group22. The UI never encourages the user to give fields names that describe their purpose. You could argue it actively discourages it by introducing the friction of a dialog before a field can be renamed. It would be nice if a field were in "rename mode" when a control associated with it was first dragged onto the form!
To add behaviour to a form, the user attaches "rules" to fields. This is an n-rules-to-1-field relationship. As it's not unusual for a finished form to have 50 or even many more fields, a large number of these rules make up the end result, spread throughout the form. This causes serious maintenance difficulties, as the only good UI for finding and working on these rules is on a per-field basis. There is a Logic Inspector "feature" provided with InfoPath, but its UI is very poorly designed and there's no way to relate rules to each other. This makes it easy to forget why a rule is there.
Using InfoPath seems like such an easy solution. In just a few clicks you can have a very powerful form without thinking about it! And therein lies the rub. I'd argue that many people don't put in the effort to properly think through the design of a form, and wind up with unwieldy beasts of complexity. InfoPath provides no assistance in making a form self-documenting and expects you to remember what field23 is for, or Rule 5. Over time we can end up with no-one really knowing how the thing is supposed to work, and forced to unravel a web of rules and fields to find out.
The loaded gun feature - custom code
The most innocent of requirements can demand custom code, and sometimes this isn't discovered until later down the track, once a form has been largely developed and you're locked in to the InfoPath platform. The problem with code is that it requires a developer, adds another level of complexity, and entirely changes the method of deployment to SharePoint (requiring involvement from the IT department's SharePoint administrator). Adding code is something that even Microsoft don't recommend:
Create InfoPath forms with as little code as possible, and put the bulk of your code into middle tier services and workflows.
If you're adding code anyway, think about why you are using InfoPath and not developing a custom solution. If your deployment target is SharePoint, newer features such as LINQ to SharePoint are very easy to implement.
Do you prefer a messy or broken form?
If you have developed a form and decide one part of it has to be deleted or replaced with something else then you have two options:
- You can leave the old fields and groups unused in the underlying data structure but not show them. This means over time you'll have leftover cruft with a purpose that no-one can remember.
- You can delete them and break the ability to open any older forms. Removing aspects of the data structure is a strict no-no and InfoPath cannot handle this situation.
The multiple environment nightmare
InfoPath loves to hard-code references to everything, particularly data sources, so if you want to follow a best practice process of deploying to a test environment first, you're asking for more maintenance headaches. Some of the issues are alleviated by deploying a form as a feature that can dynamically correct these references but this requires Visual Studio and locks you into this method of deployment for the life of the form.
It's also possible to hit problems with InfoPath's caching. This tries to be helpful but sometimes can be overzealous or just plain gets confused between different environments. Unfortunately it can result in your testers using the test version of a form in production before the update has been deployed!
Too many bugs
You're never gonna know when one's going to bite you and cost serious time, again and again and again. There are bugs and unexpected behaviour in the client, when deploying forms with code, and in Forms Services. Pretty embarrassing when you think you have a form "just right" and it breaks in front of your demonstration audience.
The InfoPath sales pitch
Microsoft's selling point is that InfoPath is part of Office and any savvy business user can maintain these forms. However this greatly depends on the complexity of the form and the user's expertise. Many people struggle as forms get more complicated for the reasons mentioned here (plus more), and introducing multiple environments or custom code makes it nigh impossible.
As mentioned earlier, InfoPath has its place. Make sure your reasons for using it are the right ones.
I could go on! But hopefully this gives you some things to think about before falling into the InfoPath deep end. InfoPath has its place for simpler solutions and usually doesn't require a developer resource. Just be careful not to let it cost you dearly later. Fingers crossed that Microsoft will fix, in the next version, some of the fundamental flaws the product has had for a while now.