But it’s different when I work; I don’t mess about with those projects.
Since you are supposed to program deliberately, introducing technology you aren’t familiar with would break that rule, as you add risk to a project somebody else is paying for.
The answer seems to be to get familiar with the technology, but who pays for that investment? And I call it an investment because it might not necessarily yield positive results. You might find out (which has happened in a few projects I’ve been involved in) that the technology was pretty crap. Even if you had awesome developers you trust recommend it, there are still no guarantees. Who pays? Your boss? The customer? A split?
When I asked Stupid Question 65: Can we expect a workplace to let us set aside time for learning?, the majority of responses I got were that in many companies no time is set aside for learning. So the only way a developer can introduce new technologies, and therefore maybe better solutions, is to learn after work, to take the risk during a project by persuading the person making those decisions, or to not ask at all.
I can’t answer this question; it’s a hard one. We have such beautiful minds that if a project were to fall apart due to new technology being introduced that turned out to be a bad choice, most of us would insist on not being wrong (it’s called cognitive dissonance). But sometimes we are. Who gets to make the decision, and who pays? And how do we best approach this? I hope to get some good advice on this one; it’s a very important question for me, and for many developers, I would think.
Should you introduce new technologies in a project?
How is this best done/approached?
And who picks up the bill?